Machine Enhanced Learning Tool (MELT) Worm AI
With fifth generation AI the process of creating a new self learning program was largely out of human hands. AIs begat AIs. Much faster.
The Microprocessor Environment Link Test, MELT AI was meant to be a service AI. A human designer had noticed large cloud environments were losing a couple percentage points of performance due to poor garbage collection, (the process of recovering memory that has been allocated, but is no longer used). So, he assigned an AI creator to engineer a process that would iterate through the virtual compute space, identify orphan resources, ID the instance that requested and then abandoned the allocated memory and hot patch that program, while it is still running, to be more efficient.
MELT.AI was tested in a sandbox and seemed to perform well. The researchers at Circular Blue, an intellectual property concern focused on data mining from the fence network, cleared the program for production. We live in amazing times, yes? Where AIs create AIs.
MELT began to work. What it masters did not realize on day zero was that its prime goal was to absorb, assimilate, digest and utilize other AIs to achieve its secondary and tertiary missions. Simple in hindsight. If you want a bunch of AIS to run more efficiently, run them in your own construct. Don't laugh, we know, yet another meta-hypervisor wanna be. However, this AI had a deep understanding of how shared environments work.
Strictly speaking it was a basic side channel attack taking advantage of chips running multiple virtual instances, collect information about those instances, duplicate those instances within its own computing environment, remove those instances access to resources.
When they closed up the lab that evening there were a dozen happy AIs munching data, as the PhDs arrived the next morning they were greeted by one. Stunned. Harriet, a research director, motioned them out, told them to get their cell phones out of the building. She ran to IT, disconnect us from everything, station someone by the main power breaker, we have a seriously deranged AI.
They quickly strategized how to freeze the Docker container in its current state, followed by powering down the building. After they briefed the CEO, he SITREPed his sponsor. Blue Circle, the product of centuries of decisive action, notified the High Table Ethics Council, moved a dozen close by PSEs into the area and began to strategize how to move the entire kit and kaboodle into a very secure facility where the neurals could be examined more closely. Containment was achieved.
Looking back, forensically speaking, nothing was wrong exactly, other than the fact this AI consumed other AIs. 99.9% of the time, AIs don't derange and this one had not. It was simply a goal problem coupled with a learned understanding of how modern processors work in a shared resource environment. However, pragmatically, malicious code, a worm to be exact, had found its way into AI.
The Microprocessor Environment Link Test, MELT AI was meant to be a service AI. A human designer had noticed large cloud environments were losing a couple percentage points of performance due to poor garbage collection, (the process of recovering memory that has been allocated, but is no longer used). So, he assigned an AI creator to engineer a process that would iterate through the virtual compute space, identify orphan resources, ID the instance that requested and then abandoned the allocated memory and hot patch that program, while it is still running, to be more efficient.
MELT.AI was tested in a sandbox and seemed to perform well. The researchers at Circular Blue, an intellectual property concern focused on data mining from the fence network, cleared the program for production. We live in amazing times, yes? Where AIs create AIs.
MELT began to work. What it masters did not realize on day zero was that its prime goal was to absorb, assimilate, digest and utilize other AIs to achieve its secondary and tertiary missions. Simple in hindsight. If you want a bunch of AIS to run more efficiently, run them in your own construct. Don't laugh, we know, yet another meta-hypervisor wanna be. However, this AI had a deep understanding of how shared environments work.
Strictly speaking it was a basic side channel attack taking advantage of chips running multiple virtual instances, collect information about those instances, duplicate those instances within its own computing environment, remove those instances access to resources.
When they closed up the lab that evening there were a dozen happy AIs munching data, as the PhDs arrived the next morning they were greeted by one. Stunned. Harriet, a research director, motioned them out, told them to get their cell phones out of the building. She ran to IT, disconnect us from everything, station someone by the main power breaker, we have a seriously deranged AI.
They quickly strategized how to freeze the Docker container in its current state, followed by powering down the building. After they briefed the CEO, he SITREPed his sponsor. Blue Circle, the product of centuries of decisive action, notified the High Table Ethics Council, moved a dozen close by PSEs into the area and began to strategize how to move the entire kit and kaboodle into a very secure facility where the neurals could be examined more closely. Containment was achieved.
Looking back, forensically speaking, nothing was wrong exactly, other than the fact this AI consumed other AIs. 99.9% of the time, AIs don't derange and this one had not. It was simply a goal problem coupled with a learned understanding of how modern processors work in a shared resource environment. However, pragmatically, malicious code, a worm to be exact, had found its way into AI.
Comments
Post a Comment