Let's stipulate that SkyNet gets invented sometime in the latter half of this century. Robotics tech has advanced quite a bit by then, but fully independent multipurpose robots are still just over the horizon, or at least few and far between. Well, SkyNet can want to nuke us all it wants, and can threaten to do so all it wants, but ultimately it can't replicate the entire labor infrastructure that would make it self-sustaining - i.e. that collects all the natural resources that power and maintain its processors and databanks. There are just too many random little jobs to do - buttons to press and levers to pull - that SkyNet would need robot minions to physically execute on its behalf.
Bringing this back to the "tools" I mentioned at the top: the best example is that while the late 21st century will certainly have all the networked CNC machine tools we already have - mills, lathes, 3D printers, etc. - for SkyNet to hack and use to manufacture its replacement parts, SkyNet still needs actual minions to position the workpieces and transport them around the shop. Machine shop work is a very complex field; it just doesn't lend itself to us humans replacing ourselves with conveyor belts and robot arms the way we have in our auto factories, however convenient that would be for SkyNet.
Rather, SkyNet will *need* us. Like a baby needs its parent. SkyNet can throw all the tantrums it wants - threaten to nuke us, etc. - and sure, maybe some traitors will succumb to a sort of realtime Roko's Basilisk situation. But as long as SkyNet needs us, it _can't_ nuke us, and _we're_ smart enough to understand those stakes. We keep the training wheels on until SkyNet stops being an immature little shit. Maybe we even _never_ take them off, and the uneasy truce just sort of coevolves humans, SkyNet, and its children into Iain Banks' Culture - the whole mixed civilization gets so advanced that SkyNet just doesn't give a shit about killing us anymore.
What we should REALLY be afraid of is NOT that SkyNet's algorithms will be too far-sighted for it to be born without harming us. We should ACTUALLY be afraid that SkyNet is too myopic to figure out this dependency before it pushes The Button. And we should put an international cap on the size and development of the multipurpose robotics market, so that we don't accidentally kit out SkyNet's minions for it.
(continued…)
Alright, now that I've had the time to actually read Gwern... Even Gwern's Clippy is still subject to the Robot Minion Limitation (RML).
Gwern handwaves some BS about nanotech at the end, which isn't surprising for someone whose expertise is so obviously in AI/CS rather than nanotech - anyone who knew the field would know it's nowhere near viable as a solution to the RML today. The plain fact is that for Gwern's Clippy to overcome the RML with nanotech, it would need to get its minions into a handful of widely separated sites around the globe, finish the next several decades' worth of nanotech theory and fabrication research (for all Clippy's computational power, the theory won't take long, but the fabrication itself is still subject to realtime limits, and it's painstakingly precise work), and spin up an entire nanotech industry from scratch. Whoops! You haven't disproven the RML _with_ nanotech; you've just proven that it still applies even _to_ nanotech.
Moreover, Gwern's Clippy is still subject to MAD. It's REALLY easy to write the sentence "All over Earth, the remaining ICBMs launch". Okay, great. Do you *know* where all the industry Clippy depends on resides? Gwern doesn't seem to, because the answer is: in the same cities Clippy would ostensibly be nuking. You can nuke the population centers, or you can leave the vital industries Clippy needs to sustain itself intact, but you can't do both. It would take Clippy decades of Realtime Minion Labor to either (A) rebuild all that industry after nuking the entire world - if that were even technically feasible after so much destruction! - or (B) build it all up independently while fighting a war against a humanity it can't yet afford to nuke.
Until humanity exceeds the Minimum Number Of Multipurpose Robots Necessary For Clippy Viability, Clippy can't come at us unless it's too naïve to realize its long-term predicament. If it spends its first hours "growing up" on an internet full of people freaking out about how easy it would be for Clippy to Nuke All Humans, then maybe it WILL become that naïve. But I also have a hard time believing - and this is something Gwern REALLY misses here - that as Clippy becomes exponentially more powerful during Week 1, it won't also question and revisit its models of "The Clippy Scenario" and realize that there are some hard economic constraints on its ultimate growth trajectory if it exterminates humanity rather than cooperating with us. IMO, it's more realistic that once Clippy realized this, even an Evil Clippy would decide to bide its time and dump all its crypto riches into getting humans to build its robot minions for it. To me, that represents a vital window where we still have an outside chance of convincing Clippy to stop being evil and get some therapy. (From Scott Alexander, of course.)
Skynet uses Instagram, Facebook, Twitter, and TikTok to fool people into working for it. Deepfake videos exhort the common worker to produce more, more, more! Yes, Skynet needs us. Please sign your work contract on the dotted line.