Daniel Demoz
Ethics for Design, AI, and Robotics
Tesla’s Optimus is a general-purpose humanoid robot designed to perform tasks in both home and workplace settings. Its anthropomorphic design and pervasive data collection raise significant ethical concerns related to privacy, autonomy, and labor displacement. This paper combines a technology ethics case study with a "metaphor hacking" exploration to analyze how cultural narratives shape our understanding of Optimus and to propose ethical design modifications.
Tesla’s Optimus (Tesla Bot) is a humanoid robot developed to perform general-purpose tasks. It integrates advanced AI for recognizing faces, interpreting speech, and engaging in conversational reasoning. Optimus is always connected to the cloud, using real-time video and audio input to navigate and learn from its surroundings.
This blend of surveillance capabilities and anthropomorphic design makes Optimus ethically complex. It collects large amounts of personal data while blending into intimate environments. It also physically and cognitively mimics human workers, enabling it to take over a broad range of tasks.
This document combines two complementary analyses:
- A technology ethics case study examining privacy and economic impacts.
- A metaphor hacking exploration of how cultural narratives shape our understanding of Optimus.
Optimus operates as an always-on presence, capturing audio and visual data within personal and professional environments. Its camera and microphone, combined with AI processing in the cloud, create a persistent surveillance system embedded in everyday life.
"When a machine looks human, we may treat it more like a companion than a device." — Kate Darling [1]
This emotional closeness can blur boundaries and make users less vigilant about what information they share. Ryan Calo frames robots as "legal metaphors," arguing they challenge existing concepts of agency and privacy [2]. Optimus doesn’t just monitor like a camera on a wall; it interacts, adapts, and occupies physical space.
The robot is marketed as a general-purpose worker capable of performing a wide range of physical tasks. While promising for efficiency, this versatility positions Optimus as a direct competitor to human workers, especially those in low-wage or physically demanding jobs.
The ethical risk is not hypothetical. Technological language often masks the real consequences of innovation [3]. When we frame robots like Optimus as "helpers," we avoid acknowledging that their deployment may eliminate livelihoods.
One way to understand Optimus is through the metaphor of the servant or helper. This fits well with Tesla’s description of the robot as something meant to take on “dangerous, repetitive, and boring tasks” [4].
This metaphor brings to mind a tool designed to make life easier, whether in the workplace or at home, akin to Rosie the Robot from The Jetsons or the assistants in I, Robot [5].
This way of seeing the robot emphasizes values like usefulness, control, and safety. It makes us expect obedience and predictability. But it also risks downplaying the complexity and growing adaptability of modern AI.
Another way to think about Optimus is as a child or learner. Tesla has said the robot will improve over time through updates and experience. Like a child, it will need feedback, training, and a supportive environment to become more capable.
This idea is echoed in films like A.I. Artificial Intelligence and Chappie [5]. Seeing Optimus as a digital child makes us consider both its potential and its need for guidance.
![Robot as a Servant](assets/robot-servant.webp)
Figure 1: Humanoid robot as a servant. Image generated using OpenAI Sora. [6]

![Robot as a Child](assets/robot-child.webp)
Figure 2: Humanoid robot as a child. Image generated using OpenAI Sora. [6]
While both metaphors offer useful perspectives, the child or learner metaphor provides a more forward-looking and ethically sound way of thinking about Optimus. It matches how AI works today—constantly improving through interaction and feedback.
Seeing the robot as a learner also places responsibility on the user. It supports a more realistic, adaptive relationship that can grow over time. And it makes room for trust to be earned rather than assumed.
To address these ethical concerns, two design modifications are proposed. The first is a Contextual Consent Module (CCM), sketched below, which:
- Allows users to control, and visibly track, when the robot is recording.
- Aligns with Nissenbaum’s idea that privacy is shaped by social context and appropriate information flow.
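A minimal sketch of how consent-gated recording might work follows. The Context categories, class name, and method names are illustrative assumptions for this paper, not Tesla’s actual software interface.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Context(Enum):
    """Social contexts with different norms of appropriate information flow."""
    HOME = "home"
    WORKPLACE = "workplace"
    PUBLIC = "public"


@dataclass
class ContextualConsentModule:
    """Hypothetical CCM: recording is off by default, enabled only per context
    after explicit user consent, and always visible in a status indicator."""
    consents: dict = field(default_factory=dict)   # Context -> bool
    recording: bool = False
    log: list = field(default_factory=list)        # user-visible audit trail

    def grant_consent(self, context: Context) -> None:
        self.consents[context] = True
        self.log.append((datetime.now(), f"consent granted for {context.value}"))

    def revoke_consent(self, context: Context) -> None:
        self.consents[context] = False
        self.recording = False
        self.log.append((datetime.now(), f"consent revoked for {context.value}"))

    def start_recording(self, context: Context) -> bool:
        """Recording starts only if the current context has explicit consent."""
        if self.consents.get(context, False):
            self.recording = True
            self.log.append((datetime.now(), f"recording started in {context.value}"))
        return self.recording

    def status_indicator(self) -> str:
        """What the robot would visibly display at all times."""
        return "RECORDING" if self.recording else "NOT RECORDING"


if __name__ == "__main__":
    ccm = ContextualConsentModule()
    ccm.start_recording(Context.HOME)    # denied: no consent yet
    print(ccm.status_indicator())        # NOT RECORDING
    ccm.grant_consent(Context.HOME)
    ccm.start_recording(Context.HOME)
    print(ccm.status_indicator())        # RECORDING
```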
The second is a Human Impact Forecast Module (HIFM), sketched below, which:
- Analyzes local employment data to estimate the social impact of deploying Optimus in a given setting.
- Generates a report suggesting alternatives such as phased automation or worker retraining.
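The sketch below shows one way an HIFM could turn a local labor-market snapshot into a deployment recommendation. The data fields, thresholds, and the forecast_impact function are hypothetical assumptions chosen for illustration.

```python
from dataclasses import dataclass


@dataclass
class LocalEmployment:
    """Illustrative local labor-market snapshot (all fields hypothetical)."""
    region: str
    workers_in_affected_roles: int
    tasks_automatable: float      # share of those roles Optimus could cover, 0-1
    retraining_capacity: int      # workers local programs can retrain per year


def forecast_impact(data: LocalEmployment, robots_deployed: int,
                    tasks_per_robot: float = 1.0) -> dict:
    """Rough estimate of displaced workers plus a suggested mitigation strategy."""
    # Naive displacement estimate: each robot covers some share of automatable work.
    displaced = min(
        int(data.workers_in_affected_roles * data.tasks_automatable),
        int(robots_deployed * tasks_per_robot),
    )
    years_to_retrain = (displaced / data.retraining_capacity
                        if data.retraining_capacity else float("inf"))
    if years_to_retrain <= 1:
        suggestion = "full deployment with concurrent retraining"
    elif years_to_retrain <= 3:
        suggestion = "phased automation matched to retraining capacity"
    else:
        suggestion = "delay deployment; expand retraining programs first"
    return {
        "region": data.region,
        "estimated_workers_displaced": displaced,
        "years_to_retrain_displaced": round(years_to_retrain, 1),
        "recommended_strategy": suggestion,
    }


if __name__ == "__main__":
    snapshot = LocalEmployment("Example County", workers_in_affected_roles=5000,
                               tasks_automatable=0.4, retraining_capacity=600)
    print(forecast_impact(snapshot, robots_deployed=1500))
```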
Together, these features embed ethical reflection into Optimus’s operation, encouraging transparency and foresight.
One possible objection to the Human Impact Forecast Module is that it may slow innovation and burden businesses with regulatory-like barriers.
Response: The goal is not to prevent progress but to guide it responsibly—balancing innovation with inclusion and accountability. Ethical foresight must be calibrated so that it informs innovation, not strangles it.
Tesla’s Optimus represents a significant step toward human-robot integration. Its ethical implications require thoughtful design and cultural framing. By incorporating modules like the CCM and HIFM, and by adopting metaphors that emphasize growth and responsibility, we can foster a future where humans and robots collaborate safely and equitably.
[1] K. Darling, "Who's Johnny?" presented at We Robot, 2015.
[2] R. Calo, "Robots as Legal Metaphors," Harvard Journal of Law & Technology, vol. 30, no. 2, 2017.
[3] M. Jones and J. Millar, "Hacking Metaphors," unpublished manuscript, 2023.
[4] Tesla, "Tesla AI Day 2022," Tesla, Aug. 2022.
[5] Film references: I, Robot; The Jetsons; A.I. Artificial Intelligence; Chappie.
[6] OpenAI, "Image of humanoid robot in domestic setting," generated using Sora, OpenAI, Jun. 2025.