Okay, now let’s ask it to write anti-ransomware. My guess is it will help with that too. And then the balance is struck and the obvious becomes obvious: AI is a tool to enhance all aspects of our lives. But instead we seem to hear only about the ways we should be fearful and worried about it.
Well then I'll just ask it to write a program to make a program to ... Un-enhance .. De-enhance?... Our lives and end it with "NO TAKE BACKS" and then we're all fucked
The list of features for the "ransomware" example he had ChatGPT help him write is almost identical to the feature list of software designed to help dissidents protect sensitive information from an oppressive government.
This is like saying “you can write ransomware in C++” and implying it is somehow the language’s fault. ChatGPT is a tool; it does what the user asks it to (or it should, anyway). It has no agency and no morals. The argument that it makes it easier to write ransomware is silly as well. From high-level languages to libraries and IDEs, we’ve always been developing tools to make programming easier! This is just the latest iteration.