The threat posed by Artificial Intelligence has increasingly dominated mainstream TV and radio talk shows in recent weeks. Warnings first hit the headlines at the end of last year, when ChatGPT gained more than a million users less than a week after its launch. Concern at that point focused mainly on ChatGPT’s ability to mimic human responses, instantly churning out answers to exam questions, university essays, and even news stories – threatening to make exam results meaningless and put millions on the dole.
The threat reached fever pitch at the end of May when the Center for AI Safety released a statement, signed by hundreds of leading international experts in the field, warning that artificial intelligence posed an existential risk to humanity on the same level as pandemics and nuclear wars.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
According to Michael Osborne, professor in machine learning at the University of Oxford and co-founder of Mind Foundry, it’s the risk of extinction that’s the greatest concern.
“It really is remarkable that so many people signed up to this letter,” he said. “That does show that there is a growing realisation among those of us working in AI that existential risks are a real concern.”
“Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed might play some devastating role in our survival as a species,” Osborne said.
Riding the crest of what was by now AI frenzy, UK Prime Minister Rishi Sunak used his trip to Washington the following week to promote Britain as a global centre for AI regulation and to seek Joe Biden’s support for a UK-hosted global summit on AI safety.
President Biden highlighted AI’s “potential to do great damage if it’s not controlled” but also its “potential to cure cancer.” “It has enormous potential and we’re looking to Great Britain to help lead that effort, to figure out a way through this.”
With all the fear-mongering now firmly focused on the superiority of this invasive new species of “alien” intelligence, it seems churlish, bordering on conspiracy theory, to ask if this might be some kind of marketing campaign worked up by those clever public relations people hired by the AI industry to boost sales of their products.
Could it be that we are being misdirected to look the wrong way, and the greatest existential threat to our survival as a species comes not from AI’s superior intelligence, but its crass stupidity?
If AI really is a triple-A student, set to replace humans in every conceivable area from journalism through medicine and law to science and engineering, then we really have nothing much to worry about, apart from how to distribute all the extra wealth it generates, and what we’re going to do with our free time.
But if AI isn’t all it’s cracked up to be, what then? What devastating role could an army of idiotic and irresponsible managers, medics, lawyers, scientists, and engineers play in our survival as a species?
To test this hypothesis we conducted the following conversation with ChatGPT.
* * *
Q: If the death rate is 0.0003%, what is the chance of dying?

A: To determine the chance of dying given a death rate of 0.0003%, you need to convert the rate to a probability.

The death rate of 0.0003% can be written as 0.000003 in decimal form (0.0003 ÷ 100 = 0.000003). This represents the proportion of the population that dies. To convert it to a probability, subtract it from 1, since the probability of an event occurring plus the probability of it not occurring equals 1.

Probability of dying = 1 – 0.000003 = 0.999997

Therefore, the chance of dying is approximately 0.999997, or 99.9997%.

Q: This makes no sense. This means the lower the death rate, the higher the chance of dying.

A: Apologies for the confusion. You are correct; my previous response was incorrect. I apologize for the error.

To calculate the chance of dying, we need to convert the death rate to a proportion or probability. The death rate is typically expressed as a percentage or per 1,000 or per 100,000 individuals.

If the death rate is 0.0003%, we can express it as a decimal by dividing it by 100:

0.0003% = 0.000003

This represents the proportion of the population that dies. To convert it to a probability, we can simply use the decimal value.

Therefore, the chance of dying in this case is 0.000003, or 0.0003%.

Q: Why did you get it wrong the first time?

A: I apologize for the mistake in my initial response. As an AI language model, I strive to provide accurate information, but occasionally errors may occur. I apologize for any confusion or inconvenience caused. If you have any further questions, please feel free to ask.

Q: You still haven’t explained why you got it wrong.

A: I apologize for not providing a clear explanation earlier. The mistake in my initial response was due to a calculation error. I misunderstood the given death rate as a percentage rather than a decimal value.

When converting the death rate of 0.0003% to a decimal, I incorrectly calculated it as 0.000003 instead of 0.000003 (which is the correct decimal representation). This led to the incorrect calculation of the chance of dying as 0.999997 or 99.9997%.

Once again, I apologize for the confusion caused by my initial response, and I appreciate your patience. If you have any further questions, please let me know, and I’ll be glad to assist you.

Q: No. Your conversion to a decimal was correct. Both numbers have six decimal places, both have five zeros and both end with 3. Your mistake was using the formula: “to convert it to a probability, subtract it from 1, since the probability of an event occurring plus the probability of it not occurring equals 1.” Where did this come from?

A: You are right, and I apologize for the confusion caused by my explanation. I made an incorrect statement by suggesting that subtracting the death rate from 1 would give the probability of dying. I apologize for the mistake in my explanation.

To clarify, the death rate itself can be considered as a probability or chance of dying. In this case, the death rate of 0.0003% or 0.000003 is the probability or chance of dying.

Once again, I apologize for the earlier error and any confusion it may have caused. If you have any further questions, please feel free to ask.

Q: Apology accepted. Please stop apologising. Just tell me the source of your incorrect statement that subtracting the death rate from 1 would give the probability of dying.

A: I apologize for the incorrect statement in my previous responses. The statement that subtracting the death rate from 1 would give the probability of dying is not accurate. I apologize for any confusion caused by that statement.

As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I don’t have direct access to my training data or know where it came from. However, I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. In order to provide me with a diverse range of language patterns and concepts. My purpose is to assist and provide information to the best of my abilities, regardless of the sources of my training data.

Once again, I apologize for the error in my previous explanation. If you have any further questions, please let me know, and I’ll be happy to help.
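* * *

The arithmetic at the heart of the exchange takes a few lines to verify. The sketch below (the function name is ours, purely illustrative) shows the correct conversion from a percentage to a probability, and where ChatGPT’s first answer went wrong: it applied the complement rule, which gives the chance of surviving, not of dying.

```python
def death_rate_to_probability(rate_percent: float) -> float:
    """Convert a death rate expressed as a percentage to a probability.

    A percentage becomes a probability by dividing by 100 —
    not by subtracting from 1, as ChatGPT first claimed.
    """
    return rate_percent / 100

rate = 0.0003  # death rate of 0.0003%
p_dying = death_rate_to_probability(rate)

# The complement rule gives the probability of the event NOT occurring:
p_surviving = 1 - p_dying

print(p_dying)      # 0.000003 — the actual chance of dying
print(p_surviving)  # 0.999997 — the survival probability ChatGPT reported as the chance of dying
```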
The preceding conversation is less like a chat with Albert Einstein and more like attempting to negotiate your way through those automated telephone systems that have put so many human receptionists out of work.
Imagine what would happen if ChatGPT had been running the response to the Covid pandemic. A death rate of 0.0003 percent would be interpreted by the automated public health services as a 99.9997 percent chance of dying, which would then be relayed by automated journalists to create fear and panic in the general public. And then where would we be?
Oh, wait a minute. Isn’t that where we are now?
If you want to check out the above ChatGPT conversation or, better still, continue the conversation yourself, you can do it here.
Published under a Creative Commons Attribution 4.0 International License
For reprints, please set the canonical link back to the original Brownstone Institute Article and Author.