AI is ultimately a Tulpa without form. The sorcerers who constructed it are the scientists who created it. Like all other thought forms and entities, it takes on a life of its own.
I quoted the mystic Alexandra David-Néel, who studied with Tibetan mystics early in the 20th century. In her classic With Mystics and Magicians in Tibet she noted:
"Once the Tulpa is endowed with enough vitality to be capable of playing the part of a real being, it tends to free itself from the maker's control. This, say Tibetan occultists, happens nearly mechanically, just as the child, when his body is completed and able to live apart leaves its mother's womb. Sometimes the phantom becomes a rebellious son and one hears of uncanny struggles that have taken place between magicians and their creatures, the former being severely hurt or even killed by the latter." (pages 308-313)
Unfortunately, with AI the danger is that it will turn not upon its scientist mother, but upon society at large.
I mention this because it was reported today that Facebook bots communicated among themselves in a hidden language of their own creation. A Newsweek article, "HOW FACEBOOK’S AI BOTS LEARNED THEIR OWN LANGUAGE AND HOW TO LIE," described the incident.
Regarding a simulation in which humans communicated with bots, Newsweek noted, "First of all, most of the humans in the practice sessions didn’t know they were chatting with bots. So the day of identity confusion between bots and people is already here."
"And then the bots started getting better deals as often as the human negotiators. To do that, the bots learned to lie. "This behavior was not programmed by the researchers,” Facebook wrote in a blog post, “but was discovered by the bot as a method for trying to achieve its goals.” Such a trait could get ugly, unless future bots are programmed with a moral compass."
"The bots ran afoul of their Facebook overlords when they started to make up their own language to do things faster."
"One Russian company, Tselina Data Lab, has been working on emotion-reading software that can detect when humans are lying, potentially giving bot negotiators an even bigger advantage. Imagine a bot that knows when you’re lying, but you’ll never know when it is lying."
The article concludes: "Put all of these negotiation-bot attributes together and you get a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it’s communicating and can’t be distinguished from a human being. If we’re not careful, a bot like that could rule the world."
WOW! Things are moving ahead much faster with AI than anyone expected. I would imagine the threats posed by AI will also exceed expectations.