Have you gotten a sense that in the field of robotics we are going to run into a morality "wall" of a sort, due to progressives in charge killing any good idea? In other words, you can make an air-gapped robot know about every corruption in the world, but the progressives in charge then become that robot's permanent enemy. How will the non-corporate open-source robot wizards break that "wall of stupid progressives" and let you and the robot live in peace?
In Buffalo: Where Good Ideas Go to Die, Jim Ostrowski, CEO of LibertyMovement.org, defines progressivism as "...a self-imposed mental disability whereby the progressive shuts out any and all thoughts, ideas, logic or evidence, that government has failed or is not the solution to all human problems." How will we plan to build morality into a robot in a world run by secret-symbol stupids, hell-bent on destroying everyone and everything not connected to itself for the sake of a dollar?
The article Letting policymakers handle the trolley problem talks about robotics and the trolley problem, i.e., "what will the car do if it has to choose between killing one person or another?" Is there a progressive on the robotic driving ethics panel? How will you know?
Moral Machines: Teaching Robots Right from Wrong, by Wendell Wallach and Colin Allen, discusses how machines can act in a morally defensible way.
Do you get a sense that the only reason neurorobotics will be allowed to advance is because progressives found a way to control the machines, and secretly demanded a kill switch that they control to remove humans they don't like? What kind of technology advancement is that? It's taking a step backwards in technology.
There are all sorts of robot learning: inverse reinforcement learning, robot learning by demonstration, and social interactive robot navigation based on human intention analysis from face orientation and human path prediction. But we should also remember How Technology Hijacks People’s Minds.
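To make one of those methods concrete, here is a toy inverse reinforcement learning loop. This is a hypothetical sketch, not code from any of the linked work: an expert demonstrates walking toward a goal state on a five-state chain, and the learner recovers reward weights by matching the expert's discounted state-visit counts.

```python
import numpy as np

N_STATES, GAMMA, HORIZON = 5, 0.9, 20
ACTIONS = [-1, +1]  # left, right

def step(s, a):
    # Deterministic chain: move and clip at the ends.
    return min(max(s + a, 0), N_STATES - 1)

def feature_expectations(policy, start=0):
    """Discounted one-hot state-visit counts from rolling out `policy`."""
    mu = np.zeros(N_STATES)
    s = start
    for t in range(HORIZON):
        mu[s] += GAMMA ** t
        s = step(s, ACTIONS[policy[s]])
    return mu

def optimal_policy(w):
    """Greedy policy for the reward r(s) = w[s], via value iteration."""
    V = np.zeros(N_STATES)
    for _ in range(100):
        V = np.array([w[s] + GAMMA * max(V[step(s, a)] for a in ACTIONS)
                      for s in range(N_STATES)])
    return [int(np.argmax([V[step(s, a)] for a in ACTIONS]))
            for s in range(N_STATES)]

# The expert always moves right, toward a rewarding state the learner
# never sees directly -- only the demonstrations reveal it.
expert = [1] * N_STATES
mu_expert = feature_expectations(expert)

# Feature-matching IRL: nudge reward weights toward states the expert
# visits and away from states the current policy over-visits.
w = np.zeros(N_STATES)
for _ in range(50):
    policy = optimal_policy(w)
    w += 0.1 * (mu_expert - feature_expectations(policy))

learned = optimal_policy(w)
print(learned)  # → [1, 1, 1, 1, 1] : the learner moves right, like the expert
```

The point of the sketch is that the robot is never told "do X, don't do Y"; it infers what is valued from watching behavior, which is the core idea behind learning from demonstration.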
There is lots of talk about Information Technology and Moral Values, but how will we teach a robot to tie its knowledge to Native American ethics?
Yeah, suppose there was a vastly different robot, with a way of moving through nature that is in harmony with Native Americans. Physical robot parts would have to be tied to these basic human ethics:
Native Voices of First Nations Peoples
Native American Code Of Ethics : Pearls of Wisdom
Now suppose we packed the moving parts inside stuffed animals, like a bear, and made it air-gapped with no link to the web, running an open-source StuffNix OS similar to Tails, specially modified with an "I'll take it under advisement" security setting, so that no code runs, no files are stored, and nothing is deleted just because some other code says so. It would have the best EMF and RF shielding, such as Almute or brass screening, to make it EMF- and RF-neutral. Even if you did develop a way for the robot to move through nature that would make any member of the good-Anonymous happy, you'll always have that Wall of Stupid-Progressives that the robot will have to deal with. In this StuffNix world, nature-technology people will shout HELL NO at the thought of having a kill switch controlled by progressives who love death. And the StuffNix OS would have to have a fallback "Turn To Stone" setting, so the parts turn to goo if someone steals a robot arm or leg.
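As a sketch of that "I'll take it under advisement" setting (StuffNix is imaginary, so every name and class here is hypothetical, just one way such a policy could look): every outside request to run, store, or delete is queued instead of executed, and nothing happens unless a local human approves that exact request.

```python
from dataclasses import dataclass, field

@dataclass
class AdvisementGate:
    """Hypothetical 'I'll take it under advisement' policy gate:
    outside code can ask, but nothing runs, is stored, or is deleted
    until a local human reviews the request."""
    queue: list = field(default_factory=list)

    def request(self, source: str, action: str, target: str) -> str:
        # Record the request; never act on it automatically.
        self.queue.append({"source": source, "action": action, "target": target})
        return "taken under advisement"

    def review(self, approve) -> list:
        # A local human decides each queued request; `approve` is the
        # human's own policy function. Refused requests are simply dropped.
        decisions = [("approved" if approve(e) else "refused", e)
                     for e in self.queue]
        self.queue = []
        return decisions

gate = AdvisementGate()
gate.request("remote-update", "delete", "/memories/2020")
# A human policy that refuses every delete coming from outside:
decisions = gate.review(lambda e: e["action"] != "delete")
print(decisions[0][0])  # → refused
```

The design choice is that the gate has no code path that executes a request on its own; the only automatic answer outside code ever gets is "taken under advisement."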
In other words, any advanced robot you see in the future made by an incorporated company will always be suspect. Making a robot that takes the high road can be done, but you can't teach it how to move by saying "Don't do X, don't do Y, and don't do Z." Teaching a high-road robot about the Not-World will never fly with spiritually advanced people. Even if a blissful future happens, having a robot with the Law of One will help treat people addicted to technology, because without a cell phone, a Law of One robot will most likely be the only thing with which they'll interact.
Remember what I typed here in five years, because these issues will not go away. How will you effectively fight transhumanism?
Let the robots be robots, but let the humans be human.
You can say I don't know what I'm talking about, but are you sure an engineer with a gun in his back loves humanity any more?