Emily Hanley says she and other out-of-work copywriters are only the first wave of AI collateral and calls the collapse of her profession the "tip of the AI iceberg."
People had these same concerns and troubles during the industrial revolution, when machines started working better, faster, and cheaper than human labor doing the same jobs. Is there going to be a serious upheaval in labor again? Yup. Is it a bad thing for the world? In some ways yes, in other ways no.
The industrial revolution has done horrible things to the global environment. At the same time, many more people are much better off today than they were in the early 19th century.
AI isn't better yet, or at everything (arguably not at most things), but the first forays into the mechanization of industry weren't, either. We're at the very beginning here.
Which is actually not that different from what companies have done over the past couple of decades, namely moving positions from high-cost to low-cost countries. The cost of an AI is probably also easier to mask on the balance sheet than the cost of human resources.
AI is already better... than some people. A human using AI is probably better and faster at certain tasks than a somewhat skilled human working alone.
I bet Midjourney is better at making concept art than the vast majority of the population.
I think we set a high threshold for AI success. I saw a video a while back about how AlphaGo (an AI designed for playing Go) was able to beat a whole bunch of Go experts. One expert used an atypical move and beat AlphaGo, and people started reacting like "see? AI isn't impressive. This genius beat it." But how many of us are geniuses? And how often will geniuses beat better AI?
This is not like the industrial revolution. You really should examine why you think "we figured other things out in the past" is such an appealing narrative to you that you're willing to believe the reassurance it gives you over the clear evidence in front of you. But I'll just quote Hofstadter (someone who has enough qualifications that their opinion should make you seriously question whether you have arrived at yours based on wishful thinking or actual evidence):
"And my whole intellectual edifice, my system of beliefs... It's a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed. It felt as if not only are my belief systems collapsing, but it feels as if the entire human race is going to be eclipsed and left in the dust soon. People ask me, "What do you mean by 'soon'?" And I don't know what I really mean. I don't have any way of knowing. But some part of me says 5 years, some part of me says 20 years, some part of me says, "I don't know, I have no idea." But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard."
Bald-faced appeal to authority, okay. With a side of putting words in my mouth that I clearly did not say.
The industrial revolution destroyed some jobs, and created others. Destroyed some industries, and created others. We've been in an "information revolution" for some time, where electronic computers have supplanted human computers, and opened up an enormous realm of communication, discovery, and availability of information to so many more people than ever before in history. This is simply true.
Just as the landscape of human physical labor was forever changed by the industrial revolution, the landscape of human thinking labor will continue to be forever changed by this information revolution. AI is a potential accelerator of this information revolution, which we are already seeing the impacts of, even at this extremely early stage in the development of AI. There will be both good and bad outcomes.
You understand that the fallacy is the appeal to false authority, right? Not just any authority?
Swinging the partial names of logical fallacies around like a poorly wielded shield isn't actually an argument. It's just an attempt to poison the well.
Appealing to authority is useful. We all do it every day. And like I said, all it should do is make you question whether you've really thought about it enough.
Every single thing you're saying has no bearing on how AI will turn out. None.
If a 0 is "we figured it out" and 1 is "we go extinct", here is what all possible histories look like in terms of "how things that could have made us go extinct actually turned out":
1
01
001
0001
00001
000001
0000001
00000001
etc.
You are looking at 00000000 and assuming there can't be a 1 next, because of how many zeroes there have been. Every extinction event will be preceded by a bunch of not extinction events.
But again, it is strange that you can label an appeal to authority, but not realize how much worse an "appeal to the past" is.
Nope. I certainly have. It's the same arguments I've been hearing from people dismissing AI alignment concerns for 10 years. There's nothing new there, and it all maps onto exactly the wishful thinking I'm talking about.
Absolutely agree. We all have a strong drive to feel that what we do is unique and special, but that doesn't make it true. From the mundane to the artistic, AI can already do a large amount of what people do, and there's every reason to believe that AI's abilities will grow quickly and surpass human abilities. Based on the evidence, it looks like this is gonna happen within the next few years - like within 5.
When AI is able to replace most jobs, as a society what do we do when there are no jobs for the large majority of people? Humanity is going to go through a tough upheaval more disruptive than anything ever before. We're gonna have to figure out how to completely reorganize how we exist, what we do in our daily lives, and how we think of ourselves as a species.