« Reply #45 on: 2011-02-15 19:50:34 »
I'm not sure what to say. Are we closer as a result?
IBM computer, Jeopardy! champ tied after first day
Source: Canada.com Author: Chris Lefkow, Agence France-Presse Date: 2011.02.15
Photograph by: Ben Hider, Getty Images
WASHINGTON – An IBM computer displayed a few quirks but played to a draw on the opening day of a man vs. machine showdown with two human champions of the popular US television game show Jeopardy!.
"Watson," a supercomputer named after IBM founder Thomas Watson, and human contestant Brad Rutter each had $5,000 after the first day of the three-day match.
The other human player, Ken Jennings, was trailing the pair with $2,000.
Watson, represented on stage by a large computer monitor, was frequently quicker to the buzzer than Rutter and Jennings, correctly answering questions in its artificial voice.
Jeopardy!, which first aired on US television in 1964, tests a player's knowledge in a range of categories, from geography to politics to history to sports and entertainment.
A dollar amount is attached to each question and the player with the most money at the end of the game is the winner. Players have money deducted for wrong answers.
In a twist on traditional game play, contestants are provided with clues and need to supply the questions.
Watson receives the clues electronically by text message at the same time as they are revealed to the human contestants. The first player to hit the buzzer gets to answer the question.
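For readers unfamiliar with the show, the scoring rules above reduce to a few lines of code. This is just an illustrative sketch; the function and player names are made up, not taken from any real scoring system:

```python
def apply_answer(scores, player, value, correct):
    """Add the clue's dollar value for a correct response,
    deduct it for a wrong one (standard Jeopardy! scoring)."""
    scores[player] += value if correct else -value
    return scores

scores = {"Watson": 0, "Rutter": 0, "Jennings": 0}
apply_answer(scores, "Watson", 400, correct=True)     # first to the buzzer, right
apply_answer(scores, "Jennings", 200, correct=False)  # wrong: value is deducted
assert scores == {"Watson": 400, "Rutter": 0, "Jennings": -200}
```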
Watson showed an impressive grasp of the Beatles songbook.
"What is Maxwell's silver hammer?" replied Watson to the clue "Bang, bang, his silver hammer came down upon her head," a reference to the Beatles song.
"What is Eleanor Rigby?" Watson answered correctly to the clue "She died in the church and was buried along with her name, nobody came."
Watson at one point built up a commanding lead with $4,000 to $200 each for Rutter and Jennings.
But the machine then began to slip up, oddly repeating a wrong answer to a question Jennings had already answered incorrectly.
Jennings wrongly identified the 1920s as the decade during which the crossword puzzle and the Oreo cookie were introduced.
Given its chance, Watson gave the same answer: the 1920s.
"No, Ken said that," Jeopardy! host Alex Trebek admonished Watson.
Rutter then answered correctly -- the 1910s.
On another question, about a one-legged US Olympic champion, the clue was "It was the anatomical oddity of US gymnast George Eyser who won a gold medal on the parallel bars in 1904."
Watson replied "What is a leg?" instead of "What is missing a leg?"
"Watson's very bright, very fast but he has some weird little moments once in a while," Trebek said.
Watson, which is not connected to the Internet, plays the game by crunching through multiple algorithms at dizzying speed and attaching a percentage score to what it believes is the correct response.
Watson, which has been under development at IBM Research labs in New York since 2006, is the latest machine developed by IBM to challenge mankind -- in 1997, an IBM computer named "Deep Blue" defeated world chess champion Garry Kasparov in a six-game match.
Developing a supercomputer that can compete with the best human Jeopardy! players, however, involves challenges more complex than those faced by the scientists behind "Deep Blue," according to IBM researchers.
Watson uses what IBM calls Question Answering technology to tackle Jeopardy! clues, gathering evidence, analyzing it and then scoring and ranking the most likely answer.
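IBM hasn't published Watson's internals here, but the pipeline the article describes (gather evidence, score candidates, rank, answer only when confident) can be caricatured in a few lines. Everything below, including the threshold and the confidence figures, is a hypothetical illustration, not IBM's actual system:

```python
def best_response(candidates, threshold=0.5):
    """Rank candidate answers by an aggregate confidence score and
    'buzz in' only when the top score clears a threshold -- a toy
    stand-in for Watson's evidence-scoring-and-ranking pipeline."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    top_answer, confidence = ranked[0]
    return top_answer if confidence >= threshold else None

# Hypothetical confidence scores for the Beatles clue.
candidates = {"Maxwell's silver hammer": 0.97, "Eleanor Rigby": 0.12}
assert best_response(candidates) == "Maxwell's silver hammer"
assert best_response({"a leg": 0.3}) is None  # too unsure to buzz in
```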
"You are about to witness what may prove to be an historic competition -- an exhibition match pitting an IBM computer system against the two most celebrated and successful players in Jeopardy! history," Trebek said to kick off the show.
Jennings holds the Jeopardy! record of 74 straight wins while Rutter won a record $3.25 million on the show.
The winner of the Jeopardy! showdown is to receive $1 million. Second place is worth $300,000 and the third place finisher pockets $200,000.
IBM plans to donate 100 percent of its winnings to charity. Jennings and Rutter plan to give 50 percent of their prize money to charity.
After three hard-fought nights of trivia play, Jeopardy!'s IBM Challenge has come to an end. It seemed to be almost a certainty at the end of round two that thinking computer Watson would wipe the floor with human competitors Ken Jennings and Brad Rutter, and it did just that last night.
The bloodbath really happened on Tuesday, at the end of the two-day first round, when Watson emerged as the victor with more than double the winnings of its fleshsack competitors. Watson had $23,440 banked going into Final Jeopardy last night, a short distance from second-place contender Ken Jennings' $18,200. Rutter, meanwhile, was left in the dust, grabbing only $5,600 for himself.
In the end, Jennings accepted defeat with a smile and an ominous quip, wagering just $1,000 on the final question (assuring a win over former Million Dollar Challenge competitor Rutter) and writing below the bid, "I for one welcome our new computer overlords." Ken, man… don't encourage it!
Brain boffins at University College London have made a major breakthrough in the ongoing effort to bridge the gap between man and machine.
The UCL research team has developed a technique for mapping both the connections and functions of nerve cells in the brain, as revealed by UCL News.
"We are beginning to untangle the complexity of the brain," reads a statement from UCL research fellow Tom Mrsic-Flogel. "Once we understand the function and connectivity of nerve cells spanning different layers of the brain, we can begin to develop a computer simulation of how this remarkable organ works."
The team, led by Mrsic-Flogel, managed to map part of the visual cortex of "anaesthetized C57BL/6 mice between postnatal day 22 and 26," according to the research paper published this week in the journal Nature.
In doing so, they were not only able to determine "millions of different connections" (synapses) of "thousands of neurons" (brain cells), but also to detect which neurons worked together in response to different visual stimuli, and what paths their connections took.
In a nutshell, the team's experimental method was to use "two-photon microscopy" to determine neuronal functions in a subsection of the visual cortex of live mice, and then take a slice of that same subsection and stimulate it "in vitro" – g'bye, mousie – to determine how the individual neurons connected with one another.
By doing so, they were able to determine for the first time that neurons with similar visual functions worked together synaptically more frequently than did neurons which responded to different visual stimuli.
Or, as the paper – "Functional specificity of local synaptic connections in neocortical networks" – summarized their findings in boffin-speak:
Neurons responding similarly to naturalistic stimuli formed connections at much higher rates than those with uncorrelated responses. Bidirectional synaptic connections were found more frequently between neuronal pairs with strongly correlated visual responses. Our results reveal the degree of functional specificity of local synaptic connections in the visual cortex, and point to the existence of fine-scale subnetworks dedicated to processing related sensory information.
UCL's work is part of an emerging field called "connectomics", which seeks to map the brain's synapses. With parallels to genomics, which maps an organism's genetic makeup, connectomics aims to map how information flows through the brain.
Mrsic-Flogel was quick to admit that although the team's breakthrough was a major one, much more work needs to be done before a computer simulation could be created. After all, our brains contain an estimated one hundred billion neurons, each of which is connected to thousands of its fellows – the total number of these synapses is estimated to be in the range of 150 trillion.
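The arithmetic behind that estimate checks out if each neuron averages around 1,500 connections (an assumed figure; the article only says "thousands"):

```python
# Back-of-envelope check of the synapse estimate above.
neurons = 100e9               # ~one hundred billion neurons
synapses_per_neuron = 1_500   # "thousands" each -- an assumed average
total_synapses = neurons * synapses_per_neuron
assert total_synapses == 150e12  # ~150 trillion, matching the estimate
```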
Regarding the creation of the holy grail of a computer model of the human brain, Mrsic-Flogel said "it will take many years of concerted efforts amongst scientists and massive computer processing power before it can be realised."
As they progress toward that goal, however, the researchers hope to gain increasing knowledge of "how perceptions, sensations and thoughts are generated in the brain and how these functions go wrong in diseases such as Alzheimer's disease, schizophrenia and stroke." ®
« Reply #50 on: 2012-06-01 12:16:59 »
I'm think'in that I am just not ready yet to commit my mind to the Virtual World only to get disconnected. At least if I get rejected in the 3D world, I can look into her eyes and feel the chill before I start weeping.
LINX 'downed by ethernet loop' on external network
The London Internet Exchange (LINX) suffered an hour-long outage yesterday evening, after an unnamed external network caused an ethernet loop and protective measures failed to work.
LINX is currently trying to diagnose what went wrong. Reports on Twitter suggested that the exchange went titsup after Juniper's PTX Series packet transport switch went live on the system.
"We are confident that this is not related to the outage," the LINX spokesman told us, explaining that the problems had been caused by the aforementioned ethernet loop.
"Linx is trying to determine where the loop originated and we are also addressing why the protection on Juniper's LAN didn't work."
He added: "The exchange is now stable."
A minor issue also affected some members who had turned off their LINX ports to re-route traffic on their networks, only to find they couldn't turn them back on: MAC address learning on the interface had been disabled after exceeding the maximum limit.
LINX told users struggling to reinstate those ports to simply reset them.
We asked Juniper Networks to comment on this story, but it hadn't immediately got back to us at time of writing. ®
« Reply #51 on: 2012-08-30 01:16:07 »
Piece by piece we're for better or worse getting there.
Harvard boffins build cyborg skin of flesh and nanowires
Source: The Register Author: Iain Thomson Date: 2012.08.30
Man and machine become one
Humanity has taken another step down the path of the Borg with the invention of the first flesh containing a functional nanowire sensor network that's biocompatible with the human body.
"With this technology, for the first time, we can work at the same scale as the unit of biological system without interrupting it," team leader Charles Lieber, the Mark Hyman Jr. Professor of Chemistry at Harvard, told the Harvard Gazette. "Ultimately, this is about merging tissue with electronics in a way that it becomes difficult to determine where the tissue ends and the electronics begin."
A team from Harvard University has built the cyborg flesh by laying out a 3D grid of nanowires and small sensors and then growing tissue around them. The group has got as far as building blood vessels with the technique that can measure pH levels, as well as a chunk of rat neurons, heart cells, and muscle that can be used to test drugs.
"Previous efforts to create bioengineered sensing networks have focused on two-dimensional layouts, where culture cells grow on top of electronic components, or on conformal layouts, where probes are placed on tissue surfaces,” said team member Bozhi Tian.
"It is desirable to have an accurate picture of cellular behavior within the 3-D structure of a tissue, and it is also important to have nanoscale probes to avoid disruption of either cellular or tissue architecture."
The next step is to build these nanowire systems so that they can not only monitor, but also control the tissue they are built into. The field has possibilities for building prosthetic devices that can link up with human nerve tissue, or even organs that could be implanted and controlled by the recipient.
It might also lead to the creation of an army of cyborg warriors to destroy mankind, although the researchers left that last part out. El Reg suspects that Reading University's Captain Cyborg is already frothing at the mouth with eagerness.
« Reply #52 on: 2013-05-27 11:29:53 »
Published on Dec 28, 2012
Professor Raffaello D'Andrea from ETH Zurich at ZURICH.MINDS presents: "Feedback Control and the Coming Machine Revolution" -- an amazing display of the future capabilities of machines using flying robots (drones). Institute for Dynamic Systems and Control, Zurichminds 2012, curated by Rolf Dobelli.
"Feedback Control and the Coming Machine Revolution"
« Reply #53 on: 2013-08-05 19:06:14 »
SUMMARY: Researchers have simulated 1 second of real brain activity, on a network equivalent to 1 percent of an actual brain’s neural network, using the world’s fourth-fastest supercomputer. The results aren’t revolutionary just yet, but they do hint at what will be possible as computing power increases.
Neuroscientists have described the findings as astounding and fascinating.
The human brain is one of the most complicated structures in the universe.
Scientists at the Institute of Molecular Biotechnology of the Austrian Academy of Sciences have now reproduced some of the earliest stages of the organ's development in the laboratory.
Brain bath
They used either embryonic stem cells or adult skin cells to produce the part of an embryo that develops into the brain and spinal cord - the neuroectoderm.
This was suspended in tiny droplets of gel to give the tissue a scaffold to grow on, then transferred into a spinning bioreactor, a bath that supplies nutrients and oxygen.
The cells were able to grow and organise themselves into separate regions of the brain, such as the cerebral cortex, the retina, and, rarely, an early hippocampus, which would be heavily involved in memory in a fully developed adult brain.
The researchers are confident that this closely, but far from perfectly, matches brain development in a foetus until the nine week stage.
The tissues reached their maximum size, about 4mm (0.1in), after two months.
The "mini-brains" have survived for nearly a year, but did not grow any larger. There is no blood supply, just brain tissue, so nutrients and oxygen cannot penetrate into the middle of the brain-like structure.
One of the researchers, Dr Juergen Knoblich, said: "What our organoids are good for is to model development of the brain and to study anything that causes a defect in development.
"Ultimately we would like to move towards more common disorders like schizophrenia or autism. They typically manifest themselves only in adults, but it has been shown that the underlying defects occur during the development of the brain."
The technique could also be used to replace mice and rats in drug research, as new treatments could be tested on actual brain tissue.
'Mindboggling'
Researchers have been able to produce brain cells in the laboratory before, but this is the closest any group has come to building a human brain.
The breakthrough has excited the field.
Prof Paul Matthews, from Imperial College London, told the BBC: "I think it's just mindboggling. The idea that we can take a cell from a skin and turn it into, even though it's only the size of a pea, is starting to look like a brain and starting to show some of the behaviours of a tiny brain, I think is just extraordinary.
"Now it's not thinking, it's not communicating between the areas in the way our brains do, but it gives us a real start and this is going to be the kind of tool that helps us understand many of the major developmental brain disorders."
The team has already used the breakthrough to investigate a disease called microcephaly. People with the disease develop much smaller brains.
By creating a "mini-brain" from skin cells of a patient with this condition, the team were able to study how development changed.
They showed that the cells were too keen to become neurons, specialising too early. This meant the cells in the early brain did not bulk up to a high enough number before specialising, which limited the final size of even the pea-sized "mini-brains".
The team in Vienna do not believe there are any ethical issues at this stage, but Dr Knoblich said he did not want to see much larger brains being developed as that would be "undesirable".
Dr Zameel Cader, a consultant neurologist at the John Radcliffe Hospital in Oxford, said he did not see ethical issues arising from the research so far.
He told the BBC: "It's a long way from conscience or awareness or responding to the outside world. There's always the spectre of what the future might hold, but this is primitive territory."
Dr Martin Coath, from the cognition institute at Plymouth University, said: "Any technique that gives us 'something like a brain' that we can modify, work on, and watch as it develops, just has to be exciting.
"If the authors are right - that their 'brain in a bottle' develops in ways that mimic human brain development - then the potential for studying developmental diseases is clear. But the applicability to other types of disease is not so clear - but it has potential.
"Testing drugs is, also, much more problematic. Most drugs that affect the brain act on things like mood, perception, control of your body, pain, and a whole bunch of other things. This brain-like tissue does none of these things yet."
« Reply #55 on: 2013-10-01 00:05:29 »
So now will we have to 'believe' the results from computers and 'who guards what' becomes the question.
Quantum computing gets recursive
Source: The Register Author: Richard Chirgwin Date: 2013.10.01
Quis custodiet ipsos quanta?
When a quantum computer can produce results that would take a classical computer thousands of years, an obvious question arises: if it gives you the wrong answer, how would you know? That's a question to which University of Vienna boffins have turned their attention.
A computation involving a handful of qubits can be checked by a classical computer, because it can iterate through the possible states one-by-one. Some other quantum computations are also checkable in the classical world: for example, if we produce a quantum computer with enough power to factor very long cryptographic keys, the result would be testable against the original message.
However, quantum computing boffins assure us that just 300 qubits should represent more possible states than there are atoms in the visible universe, making the job of delivering “provably correct” results a challenge. Some proposals to overcome this go so far as to create entanglements between entire quantum computers, something which reaches far beyond any current technology.
The University of Vienna's Philip Walther, Stefanie Barz and their collaborators, proposed a scheme called “blind quantum computing” in a paper in Nature, and now, the same group says it has demonstrated the technique at a small scale.
The basic idea is simple: the calculation includes traps, intermediate steps in a calculation for which the “classical” answer can be known in advance.
Meanwhile, the quantum computer actually carrying out the calculation has no idea what it's doing. As explained at Science Magazine: “A quantum computer receives qubits and completes a task with them, but it remains blind to what the input and output were, and even what computation it performed … The test is designed in such a way that the quantum computer cannot distinguish the trap from its normal tasks”.
The trap is designed to show up any error while the quantum computer is working.
The trick is that the researchers didn't embed their traps into a classical calculation. Rather, they used a four-qubit quantum computer as the verifier, to perform a “blind Bell test” against a second quantum computer. In their Nature paper, they claim the experiment “is independent of the experimental quantum-computation platform used”.
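Setting the quantum hardware aside, the trap idea itself is easy to sketch classically: hide test tasks with known answers among the real work, and trust the batch only if every trap comes back right. The toy version below is an illustration of the general idea, not the Vienna group's protocol:

```python
import random

def run_with_traps(device, real_tasks, traps):
    """Interleave 'trap' tasks (whose answers are known in advance)
    with real tasks; the device cannot tell them apart.
    Accept the real results only if every trap is answered correctly."""
    tasks = [("real", i, t, None) for i, t in enumerate(real_tasks)]
    tasks += [("trap", None, t, expected) for t, expected in traps]
    random.shuffle(tasks)  # the device can't distinguish traps from real work
    results = [None] * len(real_tasks)
    trusted = True
    for kind, i, task, expected in tasks:
        out = device(task)
        if kind == "trap":
            trusted = trusted and (out == expected)
        else:
            results[i] = out
    return results if trusted else None

honest = lambda x: x * x   # stand-in for a correct device
broken = lambda x: 0       # stand-in for a faulty one
assert run_with_traps(honest, [3, 4], [(5, 25)]) == [9, 16]
assert run_with_traps(broken, [3, 4], [(5, 25)]) is None
```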
It's just an experiment at this stage. As Scott Aaronson of MIT told Science, "this currently has the status of a fun demonstration proof of concept, rather than anything that's directly useful yet", but such demonstrations are "necessary steps" towards useful quantum computers.
The University of Vienna is quite capable at producing “fun demonstrations” of quantum computing. Earlier this year, it produced a real-time visualisation of the emergence of entanglement.
You'd think guesswork and advanced science would be natural enemies, but not at Google where a crack team of researchers are trying to mate the two together.
In a paper presented on Monday at an artificial-intelligence conference in California, seven Google researchers outlined their image classifier, software that labels pictures by identifying what's in them. It was created by fusing two distinct machine-learning approaches together.
In short, the system can make an educated guess at identifying an unfamiliar picture based on the text labels offered to it. For example, if it was shown a photo of a black Victorian top hat it hadn't seen before, and asked if it was a black Victorian top hat or a black pedal-opened wastepaper bin – both labels it also hadn't heard of before – it could guess correctly because it knows what various other hats and garbage bins look like and knows the relationships between their labels.
The DeViSE: A Deep Visual-Semantic Embedding Model paper [PDF] describes a tech that strives to combine the eerie image recognition capabilities of Google's traditional weak-AI systems with the broad semantic modeling capabilities of its "Skip-gram" text classifiers.
This approach is called "zero-shot learning", and is seen by the Google brain trust (which includes MapReduce-creator Jeff Dean) as one of the best chances of designing systems that can deal with changeable datasets with poor classifications – in other words, the info Google's growing fleet of handheld or wheel-bound electronic eyes are likely to slurp up from the world around them.
"The goals of this work are to develop a vision model that makes semantically relevant predictions even when it makes errors and generalizes to classes outside of its labeled training set," they write.
DeViSE contains two elements: a text classifier that labels text based on its contents, and an object recognizer that studies images.
The text classifier trains a neural language model using 5.7 million documents comprising 5.4 billion words slurped from Wikipedia. The approach lets the tech convert the fuzzy world of language into a numeric graph in which each word is defined by its relationships with others.
The image recognizer, meanwhile, is a "state-of-the-art deep neural network for visual object recognition" that was trained to recognize some 1,000 categories of images.
Armed with these two power technologies, the researchers figured out a way to fuse the two together so that the model could use both approaches when attempting to classify a new image.
This model is marginally more accurate than today's state-of-the-art systems and is inherently more flexible. The researchers hypothesized:
A DeViSE model that was trained on images with labels like "tiger shark", "bull shark", and "blue shark", but never with images labeled simply "shark", would likely have the ability to generalize to this more coarse-grained descriptor because the language model has learned a representation of the general concept of "shark" which is similar to all of the specific sharks. Similarly, if tested on images of highly specific classes which the model happens to have never seen before, for example a photo of an oceanic whitetip shark, and asked whether the correct label is more likely "oceanic whitetip shark" or some other unfamiliar label (say, "nuclear submarine"), our model stands a fighting chance of guessing correctly because the language model ensures that representation of "oceanic whitetip shark" is closer to the representation of sharks the model has seen, while the representation of "nuclear submarine" is closer to those of other sea vessels.
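The decision rule the authors describe, picking whichever label's text embedding lies nearest the image's predicted embedding, can be sketched with toy vectors. The embeddings below are invented for illustration; real DeViSE embeddings are high-dimensional and learned:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_label(image_embedding, label_embeddings):
    """Pick the label whose text embedding is closest to the image's
    predicted embedding -- even for labels never seen in training."""
    return max(label_embeddings,
               key=lambda lbl: cosine(image_embedding, label_embeddings[lbl]))

# Made-up 3-d embeddings: shark-like directions cluster together.
labels = {"oceanic whitetip shark": [0.9, 0.1, 0.0],
          "nuclear submarine":      [0.1, 0.0, 0.9]}
image = [0.8, 0.2, 0.1]   # image embedding landing near the shark cluster
assert zero_shot_label(image, labels) == "oceanic whitetip shark"
```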
Subsequent experiments detailed in the paper bore out this theory.
Google believes the system has a broad range of applications in some of the search giant's trickiest problem areas.
"We believe that our model's unusual compatibility with larger, less manicured data sets will prove to be a major strength moving forward," the researchers wrote. "Though here we trained on a curated academic image dataset, our model's architecture naturally lends itself to being trained on all available images that can be annotated with any text term contained in the (larger) vocabulary. We believe that training massive "open" image datasets of this form will dramatically improve the quality of visual object categorization systems."
And once Google has honed the capabilities of this tech further, it could be used for a multitude of problems, such as distinguishing between categories like dogs, cats, and lawnmowers, and also specific entities, like telling the difference between cars such as a "Honda Civic, Ferrari F355, Tesla Model-S" they note – capabilities that are crucial ingredients for further developments in Google's key business of highly targeted, automated advertising.
As oil is to the plastics industry, data is to Google: it is the fundamental resource on which the company depends, and the more it can refine it, the more money it can make from it. For this reason machine learning and other deep analytical approaches are a priority for Google as the ad-slinger attempts to automate the classification and tagging of an ever-swelling world of digital data. With this system it has devised another approach to let it slurp more cash from the ethereal digital world.
« Reply #57 on: 2015-03-24 23:27:07 »
The pieces are being slowly but surely noodled together. It is just amazing to me.
Artificial hand able to respond sensitively thanks to muscles made from smart metal wires
Source: ScienceDaily Author: University Saarland Date: 2015.03.24
Filomena Simone, an engineer in the research team led by Professor Stefan Seelecke, is working on the prototype of the artificial hand. Credit: Oliver Dietze
Engineers at Saarland University have taken a leaf out of nature's book by equipping an artificial hand with muscles made from shape-memory wire. The new technology enables the fabrication of flexible and lightweight robot hands for industrial applications and novel prosthetic devices. The muscle fibres are composed of bundles of ultrafine nickel-titanium alloy wires that are able to tense and flex. The material itself has sensory properties allowing the artificial hand to perform extremely precise movements. The research group led by Professor Stefan Seelecke will be showcasing their prototype artificial hand and how it makes use of shape-memory 'metal muscles' at HANNOVER MESSE -- the world's largest industrial fair -- from April 13th to April 17th.
The hand is the perfect tool. Developed over millions of years, its 'design' can certainly be said to be mature. The hand is extraordinarily mobile and adaptable, and the consummate interaction between the muscles, ligaments, tendons, bones and nerves has long driven a desire to create a flexible tool based upon it. The research team led by Professor Stefan Seelecke from Saarland University and the Center for Mechatronics and Automation Technology (ZeMA) is using a new technology based on the shape memory properties of nickel-titanium alloy. The engineers have provided the artificial hand with muscles that are made up from very fine wires whose diameter is similar to that of a human hair and that can contract and relax.
'Shape-memory alloy (SMA) wires offer significant advantages over other techniques,' says Stefan Seelecke. Up until now, artificial hands, such as those used in industrial production lines, have relied on a lot of complex background technology. As a result they are dependent on other devices and equipment, such as electric motors or pneumatics, they tend to be heavy, relatively inflexible, at times loud, and also expensive. 'In contrast, tools fabricated with artificial muscles from SMA wire can do without additional equipment, making them light, flexible and highly adaptable. They operate silently and are relatively cheap to produce. And these wires have the highest energy density of all known drive mechanisms, which enables them to perform powerful movements in restricted spaces,' explains Seelecke. The term 'shape memory' refers to the fact that the wire is able to 'remember' its shape and to return to that original predetermined shape after it has been deformed. 'This property of nickel-titanium alloy is a result of phase changes that occur within the material. If the wire becomes warm, which happens, for instance, when it conducts electricity, the material transforms its lattice structure causing it to contract like a muscle,' says Seelecke.
The engineers use 'smart' wires to play the role of muscles in the artificial hand. Multiple strands of shape-memory wire connect the finger joints and act as flexor muscles on the front-side of the finger and as extensor muscles on the rear. In order to facilitate rapid movements, the engineers copied the structure of natural human muscles by grouping the very fine wires into bundles to mimic muscle fibres. These bundles of wires are as fine as a thread of cotton, but have the tensile strength of a thick wire. 'The bundle can rapidly contract and relax while exerting a high tensile force,' explains Filomena Simone, an engineer who is working on the prototype of the artificial hand as part of her doctoral research. 'The reason for this behaviour is the rapid cooling that is possible because lots of individual wires present a greater surface area through which heat can be dissipated. Unlike a single thick wire, a bundle of very fine wires can undergo rapid contractions and extensions equivalent to those observed in human muscles. As a result, we are able to achieve fast and smooth finger movements,' she explains.
Another effect of using the shape-memory metal wires is that the hand can respond in a natural manner when someone intervenes while a particular movement is being carried out. This means that humans can literally work hand-in-hand with the prototype device. A semiconductor chip controls the relative motions of the SMA wires allowing precise movements to be carried out. And the system does not need sensors. 'The material from which wires are made has sensor properties. The controller unit is able to interpret electric resistance measurement data so that it knows the exact position of the wires at any one time,' says Seelecke. This enables the hand and the fingers to be moved with high precision. The research team will be exhibiting their system prototypes at HANNOVER MESSE 2015 and showcasing the potential of the technology by performing hand grasps and the controlled movement of individual fingers. The researchers want to continue developing the prototype and improve the way in which it simulates the human hand. This will involve modelling hand movement patterns and exploiting the sensor properties of SMA wire.
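The "wire as its own sensor" idea described above amounts to a feedback loop: read the bundle's electrical resistance, infer how far it has contracted, and adjust the heating current. A minimal sketch, with all constants invented for illustration (the real controller runs on a semiconductor chip with calibrated material data):

```python
def control_step(target_contraction, resistance, current,
                 r_extended=10.0, r_contracted=8.0, gain=0.5):
    """One step of a proportional controller for an SMA wire bundle:
    estimate contraction from electrical resistance (the wire is its
    own sensor), then nudge the heating current toward the target.
    All constants are illustrative, not measured values."""
    # Resistance falls roughly linearly as the wire contracts.
    contraction = (r_extended - resistance) / (r_extended - r_contracted)
    error = target_contraction - contraction
    return current + gain * error, contraction

current, contraction = control_step(0.5, resistance=9.5, current=1.0)
assert abs(contraction - 0.25) < 1e-9  # 25% contracted at 9.5 ohms
assert current > 1.0                   # below target, so heat the wire more
```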
The team, who will be exhibiting at the Saarland Research and Innovation Stand in Hall 2, Stand B 46, are looking for development partners.
« Reply #58 on: 2015-05-01 14:47:34 »
Nod to Blunderov for this story: Looks like we need to do a rewrite on GENESIS .... In the beginning life muddled along, randomly mixing chemicals to create billions of forms of life. Then Man invented computers, sequenced DNA and all hell broke loose ... "Yeah Doc we want the kid to have a large 'member' and come with a hi-speed quantum computer interface, and we want the 'toggle gene' in place we can flip when we get old, so he will want to look after us."
The singularity Cometh.
Craig Venter: On the verge of creating synthetic life
For the first time, scientists have created synthetic DNA that survived and replicated after being injected into a living organism, an advance that could hold promise for creating new antibiotics and other drugs.
“Life on Earth in all its diversity is encoded by only two pairs of DNA bases ... and what we've made is an organism that stably contains those two plus a third, unnatural pair of bases,” said Associate Professor Floyd E. Romesberg, of The Scripps Research Institute, who led the research team.
“This shows that other solutions to storing information are possible and, of course, takes us closer to an expanded-DNA biology that will have many exciting applications — from new medicines to new kinds of nanotechnology,” Romesberg said. The findings were first announced in the scientific journal Nature on Wednesday.
The booming field of synthetic biology has raised concerns scientists are in some way "playing God" by creating living things that could escape from labs into the outside world where they have no natural predators and nothing to check their spread.
In the current experiment, the scientists took pains to make that impossible, according to their paper. The new bases are not found in the natural environment, Romesberg and his colleagues said, so even if organisms with manmade DNA were to escape from the lab they could not survive, let alone infect other organisms.
Until now, biologists who synthesize DNA in the lab have used the same molecules — called bases — that are found in nature. But Romesberg and colleagues not only created two new bases, but also inserted them into a single-cell organism and found that the invented bases replicate like natural DNA, though more slowly.
The scientists reported that they got the organisms, the common bacteria E. coli, to replicate about 24 times over the course of 15 hours.
In nature, DNA's bases, designated A, T, C, and G, pair up. A pairs with T and C with G, forming what looks like steps in a winding staircase — the double helix shape that is the DNA molecule. Bases determine what amino acids a particular strand of DNA codes for, and therefore what proteins (long strings of amino acids) are produced.
So far, the synthetic bases, which Romesberg's team call X and Y, do not code for any amino acids, the scientists reported. But in principle they — or other manmade bases — could. Much as adding a 27th and 28th letter to the English alphabet would allow more words to be created, so adding X and Y to the natural DNA bases would allow new amino acids and proteins to be created.
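The alphabet analogy maps directly onto code: extend the base-pairing table with the synthetic X-Y pair and complementation works unchanged. A minimal sketch (base names as reported in the article; the table is the standard Watson-Crick rules plus X-Y):

```python
# Natural Watson-Crick pairs plus the synthetic X-Y pair.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C", "X": "Y", "Y": "X"}

def complement(strand):
    """Return the complementary strand under the expanded six-base alphabet."""
    return "".join(PAIRS[base] for base in strand)

assert complement("ATCG") == "TAGC"    # natural bases pair as usual
assert complement("GATXC") == "CTAYG"  # X pairs with Y, Y with X
```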
It is unknown at this early stage whether the new proteins would be gibberish or meaningful. Believing that they will be useful, Romesberg co-founded a biotechnology company named Synthorx, which was officially launched on Wednesday.
Based in San Diego, California, it will focus on using synthetic biology "to improve the discovery and development of new medicines, diagnostics and vaccines," the company said in a statement. Synthorx has the exclusive rights to the synthetic DNA advance.
"In principle, we could encode new proteins made from new, unnatural amino acids — which would give us greater power than ever to tailor protein therapeutics and diagnostics and laboratory reagents to have desired functions," Romesberg said.
"Other applications, such as nanomaterials, are also possible."