This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. We revisit concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial moral agent as case studies, we analyze how these challenges persist. Our analysis indicates that the creators of MACHIAVELLI and Delphi “copy” ethical concepts and embed them in LLMs without sufficiently questioning or challenging the concepts themselves. When an ethical concept creates friction with the computer code, they merely reduce and simplify it in order to stay as close as possible to the original. We propose that ME should expand its focus to include both interdisciplinary efforts that embed existing ethical work into AI and transdisciplinary research that fosters new interpretations of ethical concepts. Both approaches are crucial for creating AI systems that are not only effective but also socially responsible. To enhance collaboration between ethicists and computer scientists, we recommend Socratic Dialogue as a methodological tool, promoting a deeper understanding of key terms and a more effective integration of ethics into AI development.