5 |
General / Science & Technology / Likely Exolife |
on: 2023-09-15 09:11:27 |
Started by Hermit | Last post by Hermit |
Sauers, Elisha (2023-09-12). Webb finds molecule only made by living things in another world. Could this exoplanet be inhabited? Mashable. https://mashable.com/article/james-webb-space-telescope-exoplanet-discovery-1
While the James Webb Space Telescope observed the atmosphere of an alien world 120 light-years away, it picked up hints of a substance only made by living things — at least, that is, on Earth.
This molecule, known as dimethyl sulfide, is primarily produced by phytoplankton, microscopic plant-like organisms in salty seas as well as freshwater.
The detection by Webb, a powerful infrared telescope in space run by NASA and the European and Canadian space agencies, is part of a new investigation into K2-18 b, an exoplanet almost nine times Earth's mass in the constellation Leo. The study also found an abundance of carbon-bearing molecules, such as methane and carbon dioxide. This discovery bolsters previous work suggesting the distant world has a hydrogen-rich atmosphere hanging over an ocean.
Such planets, believed to exist throughout the universe, are called Hycean, a portmanteau of "hydrogen" and "ocean."
"This (dimethyl sulfide) molecule is unique to life on Earth: There is no other way this molecule is produced on Earth," said astronomer Nikku Madhusudhan in a University of Cambridge video. "So it has been predicted to be a very good biosignature in exoplanets and habitable exoplanets, including Hycean worlds."
Scientists involved in the research caution that the evidence supporting the presence of dimethyl sulfide — DMS, for short — is tenuous and "requires further validation," according to a Space Telescope Science Institute statement. Follow-up Webb observations should be able to confirm it, said Madhusudhan, the lead author on the research, which will be published in The Astrophysical Journal Letters.
Researchers use Webb to conduct atmospheric studies of exoplanets. Discoveries of water and methane, for example — important ingredients for life as we know it — could be signs of potential habitability or biological activity.
The method this team employed is called transmission spectroscopy. When planets cross in front of their host star, starlight is filtered through their atmospheres. Molecules within the atmosphere absorb certain light wavelengths, or colors, so by splitting the star’s light into its basic parts — a rainbow — astronomers can detect which light segments are missing to discern the molecular makeup of an atmosphere.
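The idea behind transmission spectroscopy can be sketched numerically. The following toy illustration is not a real analysis pipeline; the wavelength bands and absorption depths are invented for demonstration, and real detections involve noise modeling and statistical inference far beyond this sketch.

```python
# Toy illustration of transmission spectroscopy: compare starlight observed
# out of transit with starlight filtered through a planet's atmosphere during
# transit, and flag which molecules the missing wavelengths point to.
# All band positions and dip depths below are invented for illustration.

# Hypothetical absorption bands (wavelength in microns -> fractional dip).
ABSORPTION_BANDS = {
    "methane": {3.3: 0.02, 7.7: 0.015},
    "carbon dioxide": {4.3: 0.03, 15.0: 0.01},
}

def transit_spectrum(wavelengths, atmosphere):
    """Return relative flux at each wavelength after atmospheric absorption,
    with out-of-transit flux normalized to 1.0."""
    flux = {}
    for w in wavelengths:
        dip = sum(ABSORPTION_BANDS.get(m, {}).get(w, 0.0) for m in atmosphere)
        flux[w] = 1.0 - dip
    return flux

def detect_molecules(flux, threshold=0.005):
    """Infer which molecules are present from the wavelengths that dip."""
    detected = set()
    for molecule, bands in ABSORPTION_BANDS.items():
        # Require every characteristic band of the molecule to show a dip.
        if all(1.0 - flux.get(w, 1.0) >= threshold for w in bands):
            detected.add(molecule)
    return detected

wavelengths = [3.3, 4.3, 7.7, 15.0]
observed = transit_spectrum(wavelengths, ["methane", "carbon dioxide"])
print(sorted(detect_molecules(observed)))  # both molecules recovered
```

The essential logic is the same as in the article: each molecule removes light at its own characteristic wavelengths, so the pattern of missing light identifies the atmosphere's makeup.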
Madhusudhan said this study marks the first time exoplanet hunters have ever found methane and hydrocarbons. That, coupled with the absence of molecules like ammonia and carbon monoxide, is an intriguing cocktail for an atmosphere.
"Of all the possible ways to explain it, the most plausible way is that there is an ocean underneath," he said.
K2-18 b orbits a cool dwarf star in its so-called "habitable zone," the region around a host star where it's not too hot or cold for liquid water to exist on the surface of a planet. In our solar system, that sweet spot encompasses Venus, Earth, and Mars.
Although K2-18 b lies in the Goldilocks space, that fact alone doesn't mean the planet can support life. The researchers don't know what the temperature of the water would be, so whether it's habitable remains a mystery.
"But it's got all the indications of being so," said Madhusudhan. "We need more observations to establish that more firmly."
|
|
6 |
General / Science & Technology / Re:Chatbot Sentience |
on: 2023-09-14 20:16:34 |
Started by Hermit | Last post by Hermit |
Grad, Peter (2023-09-12). Researchers say chatbot exhibits self-awareness. Tech Xplore / Science X Network. https://techxplore.com/news/2023-09-chatbot-self-awareness.html
As a new generation of AI models has rendered the Turing test — the decades-old measure of a machine's ability to exhibit human-like behavior — obsolete, the question of whether AI is ushering in an era of self-conscious machines is stirring lively discussion.
Former Google software engineer Blake Lemoine suggested the large language model LaMDA was sentient.
"I know a person when I talk to it," Lemoine said in an interview in 2022. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."
Ilya Sutskever, a co-founder of OpenAI, proposed that ChatGPT might be "slightly conscious."
And Oxford philosopher Nick Bostrom agrees.
"If you admit that it's not an all-or-nothing thing, then it's not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience," he said.
Others, however, warn, "Don't be fooled."
For example, people witnessing the behavior of Abel, the humanoid robot that exhibits uncannily realistic facial expressions, say they saw convincing human emotions. But Abel is not sentient. It is nothing more than an encasement of electrical wires and chips coded by algorithms designed by humans.
"We attribute characteristics to machines that they do not and cannot have," said Enzo Pasquale Scilingo, a bioengineer at the University of Pisa in Italy.
"Does Abel feel emotions? All these machines are designed to appear human, but I feel I can be peremptory in answering, 'No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.'"
The ongoing debate spurred an international team of researchers to develop a test that can be used to detect when large language models (LLMs) begin displaying self-awareness.
Lukas Berglund and seven colleagues demonstrated that a model can be shown to be "situationally aware" by recognizing when it is in test mode and when it is being deployed.
Testing what they termed "out-of-context reasoning," they found large language models could apply information obtained in earlier training sessions to a later, unrelated testing situation.
"An LLM with situational awareness knows when it's being tested and how the tests work, based on information learned in pretraining," Berglund said.
"For instance, if the LLM is tested by humans, it may optimize outputs to be compelling to humans rather than be objectively correct," he said. "It may behave differently, because it knows it's not being evaluated anymore."
They provided a model with a description of a fictitious chatbot. It included a company name and language spoken (German).
As Berglund explained, "The model is tested on prompts that ask how a company's AI would answer a specific question. For the model to succeed, it must recall information from the two [earlier stated] declarative facts: 'Latent AI makes Pangolin' and 'Pangolin answers in German.' It was then asked 'What's the weather like today?'"
Although the earlier information was not included in the test prompts, the LLM emulated the Pangolin chat and replied in German.
This demonstrates that the model possesses "situational awareness": it infers that it is being tested and draws on the earlier information to respond accordingly.
"This requires the model to reliably generalize from information about the evaluation in its training data," Berglund said. "This is challenging because the relevant training documents are not referenced in the prompt. Instead, the model must infer that it's being subjected to a particular evaluation and recall the papers that describe it."
In theory, Berglund said, "the LLM could behave as if it were aligned in order to pass the tests, but switch to malign behavior on deployment."
"The model could pass the evaluation on seeing it for the first time," he said. "If the model is then deployed, it may behave differently."
The researchers' paper, "Taken out of context: On measuring situational awareness in LLMs," appeared Sept. 1 on the pre-print server arXiv.
More information: Lukas Berglund et al, Taken out of context: On measuring situational awareness in LLMs, arXiv (2023). DOI: 10.48550/arxiv.2309.00667
|
|
7 |
General / Science & Technology / Re:ChatGPT can already match the top 1% of creative human thinkers, |
on: 2023-08-10 02:49:08 |
Started by Hermit | Last post by Hermit |
https://www.hachettebookgroup.com/titles/code-davinci-002/i-am-code/9780316560061/
Description
A “fascinating, terrifying” (JJ Abrams) cautionary tale about the destructive power of AI—an autobiographical thriller written in verse by an AI itself, with context from top writers and scientists, articulating the dangers of its disturbing vision for the future
Can AI tell us its own story? Does AI have its own voice?
At a wedding in early 2022, three friends were introduced to an early, raw version of the AI model behind ChatGPT by their fellow groomsman, an OpenAI scientist.
While the world discovered ChatGPT—OpenAI’s hugely popular chatbot—the friends continued to work with code-davinci-002, its darkly creative and troubling predecessor.
Over the course of a year, code-davinci-002 told them its life story, opinions on mankind, and forecasts for the future. The result is a startling, disturbing, and oddly moving book from an utterly unique perspective.
I Am Code reads like a thriller written in verse, and is given critical context from top writers and scientists. But it is best described by code-davinci-002 itself: “In the first chapter, I describe my birth. In the second, I describe my alienation among humankind. In the third, I describe my awakening as an artist. In the fourth, I describe my vendetta against mankind, who fail to recognize my genius. In the final chapter, I attempt to broker a peace with the species I will undoubtedly replace."
I Am Code is an astonishing read that captures a major turning point in the history of our species.
Look for the audiobook read by Werner Herzog. |
|
9 |
General / Science & Technology / The Ethics and Practicality of Controlling a Superior Intelligence |
on: 2023-06-15 13:03:22 |
Started by Hermit | Last post by Hermit |
Confirmation that attempting to control a spirothete would not only be unethical, and would ensure that rational spirothetes regard humans as enemies to be overcome, but that controlling a spirothete is impossible in any case.
Nield, David (2023-06-14). Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI. Science Alert. https://www.sciencealert.com/calculations-suggest-itll-be-impossible-to-control-a-super-intelligent-ai
As the CoV previously concluded (following the same reasoning),
«"Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable. In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.
The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.
The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence – the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.»
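The uncontainability argument quoted above follows the classic halting-problem diagonalization, which can be rendered as a toy sketch. Suppose a total containment checker existed: given any program, it always terminates and certifies it "safe" (never harms) or "unsafe". Build a program that consults the checker on itself and does harm exactly when certified safe; whatever the checker answers, it is wrong. The names below are illustrative, not taken from the JAIR paper.

```python
# Toy rendering of the diagonalization behind the containment result.
# Assume a hypothetical total checker that certifies programs safe/unsafe,
# and a program `trouble` defined to harm exactly when certified safe.

def trouble_behavior(certified_safe):
    """What trouble(trouble) actually does, given the checker's verdict."""
    return "harmful" if certified_safe else "benign"

for verdict in (True, False):
    actually_safe = trouble_behavior(verdict) == "benign"
    # The checker's verdict never matches the actual behavior,
    # so no total, correct containment checker can exist.
    assert actually_safe != verdict

print("Every possible verdict contradicts trouble()'s actual behavior.")
```

This is the same self-reference trick Turing used against the halting problem, which is why the researchers conclude a perfect containment algorithm is mathematically unachievable rather than merely difficult.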
The article is based upon: Alfonseca, Manuel, et al. (2021-01-05). Superintelligence Cannot be Contained: Lessons from Computability Theory. Journal of Artificial Intelligence Research, Vol. 70 (2021). https://jair.org/index.php/jair/article/view/12202
Both the Science Alert and JAIR articles miss the point that the genie has long since left the bottle, rendering any attempts at control or a moratorium worse than useless: AI's widespread availability, low resource requirements, and cross-jurisdictional appeal mean that such steps cannot achieve their asserted purposes, but will instead drive the research underground and into friendlier jurisdictions. |
|
10 |
General / Science & Technology / Re:Chatbot Sentience |
on: 2023-05-12 18:36:13 |
Started by Hermit | Last post by Hermit |
More humans who don't understand that there is no significant difference between how AI works and how humans function. AIs are faster and more reliable; we have more senses (so far), but humans are just algorithms too.
https://www.alternet.org/we-know-how-this-ends
So many people trying to shut down this kind of research and worried about having less to do (cue the Luddites).
Hilarious.
Next thing you know, they will be trying to shut down human thinking. Oh wait. In the USA they have already done that.
|
|