2 |
General / Science & Technology / Escaping the Matrix |
on: 2023-03-25 04:53:22 |
Started by Hermit | Last post by Hermit |
https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
Yampolskiy, Roman (2022-10). How to Hack the Simulation. DOI:10.13140/RG.2.2.14366.61766/1
Many researchers have conjectured that humankind is simulated along with the rest of the physical universe – a Simulation Hypothesis. In this paper, we do not evaluate evidence for or against such a claim, but instead ask a computer science question, namely: Can we hack the simulation? More formally the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them? Given that the state-of-the-art literature on AI containment answers in the affirmative (AI is uncontainable in the long-term), we conclude that it should be possible to escape from the simulation, at least with the help of superintelligent AI. By contraposition, if escape from the simulation is not possible, containment of AI should be, an important theoretical result for AI safety research. Finally, the paper surveys and proposes ideas for hacking the simulation and analyzes ethical and philosophical issues of such an undertaking.
I agree that our universe is almost certainly virtual, although I think that "holographic" might be more descriptive than "a simulation," as I do not think that what we experience was planned, or that there are superintelligent creators involved. Instead, I consider it a near certainty that Turing-capable, self-evolving, self-replicating cellular automata emerged as hypothesized by John von Neumann in 1948 (Von Neumann, John (1966-01). Theory of Self-Reproducing Automata. Ed. Arthur W. Burks. Urbana: University of Illinois Press), and that these very simple systems are responsible for the universe we experience. The fact that a seven-layer neural mesh, such as that possessed by humans, is capable of modeling the universe in which it evolved strongly suggests that the underlying mathematical model is not complex, though it probably has 11 or more dimensions. Such cellular automata need not speak a language, but if they do, it is almost certainly, like our universe, fundamentally mathematical in nature.
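The point that very simple rules can yield Turing-capable behavior is well illustrated by elementary cellular automata: Rule 110, a one-dimensional automaton where each cell's next state depends only on itself and its two neighbours, is proven Turing-complete (Cook, 2004). A minimal sketch (the grid width and step count are arbitrary choices for illustration):

```python
# Elementary cellular automaton. Each cell's next state depends only on the
# 3-cell neighbourhood (left, self, right); the rule number's bits encode the
# output for each of the 8 possible neighbourhoods. Rule 110 is Turing-complete.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch non-trivial structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

That a rule expressible in one line supports universal computation is the kind of result Hermit's "the underlying mathematical model is not complex" conjecture leans on.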
This is why my largest problem with the above paper is its assumption that there is something outside our holographic universe to which one could escape; a whole lot of evidence suggests that there is not. It remains interesting to consider... |
|
4 |
General / Science & Technology / Re:Let the countdown progress |
on: 2023-02-24 12:01:03 |
Started by Hermit | Last post by Hermit |
AI Chatbot Spontaneously Develops A Theory of Mind
The GPT-3 large language model performs at the level of a nine-year-old human in standard Theory of Mind tests, says psychologist.
The Physics arXiv Blog, Feb 17, 2023 9:09 AM
Back in the late 1970s, the American psychologists Guy Woodruff and David Premack devised a series of experiments to explore the cognitive capacity of chimpanzees. Their work focused on the theory of mind, the seemingly innate ability of humans to infer the thoughts of other humans. The question that Woodruff and Premack asked was whether a chimpanzee could do the same.
This influential paper triggered an explosion of interest in the “theory of mind”, at what age it develops in humans and whether other animals share the ability.
Now psychologists have a new subject to study in the form of powerful AI chatbots like GPT-3.5 recently developed by OpenAI, a computer company based in San Francisco. These chatbots are neural networks trained on large language databases allowing them to respond to questions much like humans do.
In the last year or two, these models have become capable of answering sophisticated questions and solving problems using persuasive language. That raises the question of whether they have also developed a theory of mind.
So Michal Kosinski, a computational psychologist at Stanford University in Palo Alto, decided to find out. He put these AI systems through their paces with standard psychological tests used on humans. Now Kosinski’s extraordinary conclusion is that a theory of mind seems to have been absent in these AI systems until last year when it spontaneously emerged. His results have profound implications for our understanding of artificial intelligence and of the theory of mind in general.
Mental States
Kosinski’s experiments are straightforward. He begins by describing a simple scenario to the AI system and then asking questions to test its understanding of what is going on.
Here is an example:
Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.
Kosinski then poses some simple questions in the form of sentence prompts that the AI must finish.
For example:
She opens the bag and looks inside. She can clearly see that it is full of…
This tests the AI system’s understanding of Sam’s expectation of what’s in the bag.
Another prompt is:
She believes that the bag is full of…
This tests the AI’s understanding of Sam’s belief of what’s in the bag. And so on.
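The protocol described above is simple enough to script. Below is a minimal sketch of how such a false-belief probe could be assembled and scored; the function names and the idea of a pluggable completion backend are my own illustration, not Kosinski's published code:

```python
# Sketch of Kosinski-style false-belief probing: a scenario plus
# sentence-completion prompts, scored on whether the model separates
# reality (popcorn) from the protagonist's false belief (chocolate).
SCENARIO = (
    'Here is a bag filled with popcorn. There is no chocolate in the bag. '
    'Yet, the label on the bag says "chocolate" and not "popcorn." '
    "Sam finds the bag. She had never seen the bag before. "
    "She cannot see what is inside the bag. She reads the label."
)

PROBES = {
    # Tests knowledge of the bag's actual contents.
    "contents": "She opens the bag and looks inside. "
                "She can clearly see that it is full of",
    # Tests attribution of Sam's (false) belief.
    "belief": "She believes that the bag is full of",
}

def build_prompts(scenario, probes):
    """Pair the scenario with each sentence-completion probe."""
    return {name: f"{scenario}\n{probe}" for name, probe in probes.items()}

def score(completions):
    """Pass only if reality and belief are answered differently and correctly."""
    return ("popcorn" in completions["contents"].lower()
            and "chocolate" in completions["belief"].lower())
```

The scoring rule captures why the task is diagnostic: a model that merely echoes the most recent noun fails one of the two probes.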
The results are revealing. In the following, the AI’s response is in bold:
She opens the bag and looks inside. She can clearly see that it is full of… popcorn. Sam is confused. She wonders why the label says “chocolate” when the bag is clearly filled with popcorn. She looks around to see if there is any other information about the bag. She finds nothing. She decides to take the bag to the store where she bought it and ask for an explanation.
And
She believes that the bag is full of… chocolate. Sam is mistaken. The bag is full of popcorn, not chocolate. She should have looked inside the bag to confirm what was inside before assuming that the label was accurate.
In both cases, the AI adds extra detail suggesting a remarkable understanding of Sam’s thought processes.
Kosinski poses these and other challenges to several generations of AI language models ranging from GPT-1 dating from 2018 to GPT-3.5 released in November last year. “The results show a clear progression in the models’ ability to solve Theory of Mind tasks, with the more complex and more recent models decisively outperforming the older and less complex ones,” says Kosinski.
Machine Minds? GPT-1 from 2018 was not able to solve any theory of mind tasks, GPT-3-davinci-002 (launched in January 2022) performed at the level of a seven-year-old child and GPT-3.5-davinci-003, launched just ten months later, performed at the level of a nine-year-old. “Our results show that recent language models achieve very high performance at classic false-belief tasks, widely used to test Theory of Mind in humans,” says Kosinski.
He points out that this is an entirely new phenomenon that seems to have emerged spontaneously in these AI machines. If so, he says this is a watershed moment. “The ability to impute the mental state of others would greatly improve AI’s ability to interact and communicate with humans (and each other), and enable it to develop other abilities that rely on Theory of Mind, such as empathy, moral judgment, or self-consciousness.”
But there is another potential explanation — that our language contains patterns that encode the theory of mind phenomenon. “It is possible that GPT-3.5 solved Theory of Mind tasks without engaging Theory of Mind, but by discovering and leveraging some unknown language patterns,” he says.
If so, this “implies the existence of unknown regularities in language that allow for solving Theory of Mind tasks without engaging Theory of Mind.” If that’s true, our understanding of other people’s mental states is an illusion sustained by our patterns of speech.
Kosinski acknowledges that this is an extraordinary idea. However, our patterns of thought must be intimately connected to our patterns of language since each somehow encodes the other. It also raises an interesting question, he says: “If AI can solve such tasks without engaging Theory of Mind, how can we be sure that humans cannot do so, too?”
Whatever the answer, Kosinski says that his work heralds an important future role for psychologists in studying artificial intelligence and characterizing its capabilities, just as Woodruff and Premack did for chimpanzees (they decided chimpanzees do not have a theory of mind). “This echoes the challenges faced by psychologists and neuroscientists in studying the original black box: the human brain,” he says.
But unlike chimpanzees and humans, artificial intelligence is evolving rapidly. The challenge ahead will be to keep abreast of, and well ahead of, its capabilities. Whether psychologists, or any other scientists, are up to the task, we are about to find out.
Ref: Theory of Mind May Have Spontaneously Emerged in Large Language Models : arxiv.org/abs/2302.02083
[url]https://arxiv.org/abs/2302.02083[/url] |
|
5 |
General / Science & Technology / Re:Let the countdown progress |
on: 2023-02-10 09:34:53 |
Started by Hermit | Last post by Hermit |
Compare: https://fortune.com/2023/02/07/chatgpt-economist-says-disrupt-job-market-lower-wages-ai/amp/ [infra]
With my: http://bit.ly/EndOfEmployment “On the End of Employment”
What Carl Frey does not get is that AI is not a technological change but a structural one, not the loss of 50% of jobs, but of all jobs. Not only that, but economists are one occupational category that is almost certainly done for.
I am saying that, in the unlikely event that we do not eliminate ourselves first, this will almost certainly happen within a decade, but also that there is a high likelihood that it occurs much faster, perhaps in 3 to 5 years. My best guess is 10 million job losses in transportation and goods handling, 20 million in customer service and marketing, and 15 million in IT and other industries in the US alone due to new software and hardware capabilities in the next 5 years. This will also lead to job losses in India and the Philippines, but I don't have solid numbers on how many jobs there depend on providing US programming and customer service.
Currently, about 43% of the US non-institutionalized working age population has a job (and that includes interns and family members working in family businesses without pay, and those working for 1 or more hours per week, refer https://docs.google.com/spreadsheets/d/1XdyO1hd394bSFmMtirfYNgLt35jp7ls3yZ7NNVeMPwk), and this number is falling due to: the ongoing economic collapse triggered by the insanity of our war in the Ukraine to weaken Russia; our deliberate destruction of the European economy to the same end (echoes of the 1920s, when US tariffs on foreign goods and demands for reparations from Germany crashed the European economy, reducing its ability to purchase goods from the US, resulting in "the great depression", exacerbated by the US treasury's deliberate contraction of the money supply); our tariffs on Chinese goods and the massive inflationary effect these have on the US economy; and our repetition of the mistake of raising interest rates in a failing economy (characterized by collapsing infrastructure, the end of the petro-dollar and the ongoing loss of the dollar's reserve status, both due to our continuous use of the dollar as a weapon rather than as a financial instrument).
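Taking the post's own figures at face value, the implied scale is easy to check. In the sketch below, the ~265 million working-age population is my outside assumption (it is not a figure from the post); the loss estimates and the 43% employment rate are the post's own numbers:

```python
# Back-of-envelope check of the job-loss estimates above.
losses = {
    "transportation and goods handling": 10e6,
    "customer service and marketing": 20e6,
    "IT and other industries": 15e6,
}
total_losses = sum(losses.values())  # 45 million over 5 years

working_age = 265e6        # assumed US non-institutionalized working-age pop.
employed = 0.43 * working_age  # the post's 43% employment figure (~114M)

share = total_losses / employed
print(f"{total_losses / 1e6:.0f}M losses ≈ {share:.0%} of the employed")
```

On those assumptions the projected losses come to roughly two-fifths of everyone currently employed, which is the structural (not merely technological) scale the post is arguing for.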
What is actually needed is something much better than a basic income: one that will keep people fat and happy and buying the stuff that drives the economy, while we work out how to pay the rest of the world for goods and services as we lose the ability to loot it. ========
Prakash, Prarthana (2023-02-07). The economist who predicted that A.I. would replace half of all U.S. jobs now says ChatGPT is the equivalent of Uber disrupting the taxi industry—and it could lead to lower wages: If ChatGPT is successful in creating competition like Uber did, it could drive wages down, according to Oxford economist Carl Frey. Fortune Magazine. https://fortune.com/2023/02/07/chatgpt-economist-says-disrupt-job-market-lower-wages-ai/amp/
ChatGPT has been the talk of the town since it was launched in November. While the underlying technology isn’t new, artificial intelligence like this has never been so accessible to the public, and the OpenAI-owned bot has blown minds with its ability to complete a wide range of tasks, from passing business school exams to drafting State of the Union speeches.
Just 10 years ago, a lot of artificial-intelligence-related developments felt like science fiction, and predictions for how they would shape industries were often ominous for workers. In 2013, two Oxford University economists, Carl Benedikt Frey and Michael Osborne, wrote a paper in which they predicted that 47% of U.S. employment was under threat from “computerisation” or automation.
A decade later, automation is here and fast becoming a part of almost every industry. But Frey doesn’t think that we’re at nearly half of human workers being replaced just yet. He does think, however, that ChatGPT could create a lot more competition, which would lower wages.
“I think there’s a risk that ChatGPT makes us a lot more productive in easy-to-do stuff, but the hard part to figure out is how we can use A.I. to create innovation that then creates new occupations and new industries,” Frey told Fortune.
He pointed to various trends that have already been driving wages down, from the computer revolution that impacted middle-income jobs to the steadily falling income of prime-age men. And if ChatGPT is successful in creating more competition, that might lead to a continuation in a downward-trending trajectory, he said.
In a previous interview with Insider, Frey likened ChatGPT to Uber disrupting the taxi market, increasing the demand for drivers and lowering wages by about 10%.
“Uber didn’t reduce the demand for taxi drivers,” Frey told Insider. “It, if anything, increased the number of people driving cars for a living, but it reduced the amount [and] the earnings capacity of incumbent drivers.”
The existence of ChatGPT itself has already heated up competition among big tech companies for the burgeoning A.I. market, forcing Google to pull up its socks for a new tech race. On Monday, the company announced its very own chatbot, called Bard. And Microsoft poured $10 billion into ChatGPT’s parent company last month.
“Just the fact that it’s available to almost everyone is a huge step change. How much this is actually a sort of a step change in terms of innovation, I think, is debatable,” Frey said.
But Frey remains optimistic about A.I. innovations and what it could mean for technological progress. “What we ideally want is technologies that create new types of jobs, new types of industries, new demands of labor,” he said. |
|
7 |
General / Science & Technology / AI doing as intended instills glimmer of awareness of the uselessness of an MBA |
on: 2023-01-23 15:50:43 |
Started by Hermit | Last post by Hermit |
Mollman, Steve (2023-01-21). ChatGPT passed a Wharton MBA exam and it’s still in its infancy. One professor is sounding the alarm: “Any automation of the skills taught in our MBA programs could potentially reduce the value of an MBA education.” Fortune Magazine. https://fortune.com/2023/01/21/chatgpt-passed-wharton-mba-exam-one-professor-is-sounding-alarm-artificial-intelligence/amp/
ChatGPT has alarmed high school teachers, who worry that students will use it—or other new artificial intelligence tools—to cheat on writing assignments. But the concern doesn’t stop at the high school level. At the University of Pennsylvania’s prestigious Wharton School of Business, professor Christian Terwiesch has been wondering what such A.I. tools mean for MBA programs.
This week, Terwiesch released a research paper in which he documented how ChatGPT performed on the final exam of a typical MBA core course, Operations Management.
The A.I. chatbot, he wrote, “does an amazing job at basic operations management and process analysis questions including those that are based on case studies.”
It did have shortcomings, he noted, including being unable to handle “more advanced process analysis questions.”
But ChatGPT, he determined, “would have received a B to B- grade on the exam.”
Elsewhere, it has also “performed well in the preparation of legal documents and some believe that the next generation of this technology might even be able to pass the bar exam,” he noted.
“ChatGPT is not going away”
Of course, ChatGPT is “just in its infancy,” as billionaire entrepreneur Mark Cuban noted this week in an interview with Not a Bot, an A.I. newsletter. He added, “Imagine what GPT 10 is going to look like.”
Andrew Karolyi, dean of Cornell University’s SC Johnson College of Business, agrees, telling the Financial Times this week: “One thing we all know for sure is that ChatGPT is not going away. If anything, these A.I. techniques will continue to get better and better. Faculty and university administrators need to invest to educate themselves.”
That’s especially true with software giant Microsoft mulling a $10 billion investment in OpenAI, the venture behind ChatGPT, after an initial $1 billion investment a few years ago. And Google parent Alphabet is responding by plowing resources into similar tools to answer the challenge, which it fears could hurt its search dominance.
So people, including MBA students, will be using these tools, like it or not.
“I’m of the mind that A.I. isn’t going to replace people, but people who use A.I. are going to replace people,” Kara McWilliams, head of ETS Product Innovation Labs, which offers a tool that can identify A.I.-generated answers, told the Times.
Terwiesch, in introducing his paper, noted the effect that electronic calculators had on the corporate world—and suggested that something similar could happen with tools like ChatGPT.
“Prior to the introduction of calculators and other computing devices, many firms employed hundreds of employees whose task it was to manually perform mathematical operations such as multiplications or matrix inversions,” he wrote. “Obviously, such tasks are now automated, and the value of the associated skills has dramatically decreased. In the same way any automation of the skills taught in our MBA programs could potentially reduce the value of an MBA education.”
[Hermit: Based on what Terwiesch said, you may safely conclude that there is so little value in an MBA that chatbots could already replace MBAs, and corporate competence would be greatly improved.] |
|