Hitting the Books: What do we want our AI-powered future to look like?
As robots become more lifelike, people (and perhaps machines) will need to update regulations, societal norms, and standards of organizational and individual behavior. A non-binary, nuanced assessment of robots and AI, with attention to who is setting the standards, does not mean tolerating a distortion of how we define what is human. Giving robots the ability to indiscriminately kill innocent civilians without human oversight, or deploying facial recognition to target minorities, is unacceptable. For now, responsibility lies with the people producing, programming, selling, and deploying robots and other forms of AI, whether it's David Hanson, a doctor who uses AI to diagnose cancer, or a developer who builds the AI that helps make immigration decisions. Many people are in difficult situations where human help is unavailable or unsafe, whether because of cost, being in an isolated or conflict zone, insufficient human resources, or other reasons.