Most people outside Africa share 1.5-4% of their DNA with Neanderthals, while a few populations also inherited genes from Denisovans, a study confirms.
Denisovan DNA lives on only in Pacific island dwellers, while Neanderthal genes are more widespread, researchers report in the journal Science.
Meanwhile, some parts of our genetic code show little trace of our extinct cousins.
They include hundreds of genes involved in brain development and language.
"These are big, truly interesting regions," said co-researcher Dr Joshua Akey, an expert on human evolutionary genetics from the University of Washington School of Medicine, US.
"It will be a long, hard slog to fully understand the genetic differences between humans, Denisovans and Neanderthals in these regions and the traits they influence."
Siberia cave
Studies of nuclear DNA (the instructions to build a human) are particularly useful in the case of Denisovans, which are largely missing from the fossil record.
The prehistoric species was discovered less than a decade ago through genetic analysis of a finger bone unearthed in a cave in northern Siberia.
Substantial amounts of Denisovan DNA have been detected in the genomes of only a handful of modern-day human populations so far.
"The genes that we found of Denisovans are only in this one part of the world [Oceania] that's very far away from that Siberian cave," Dr Akey told BBC News.
Where the ancestors of modern humans might have had physical contact with Denisovans is a matter of debate, he added.
Denisovans may have encountered early humans somewhere in South East Asia and, eventually, some of their descendants arrived on the islands north of Australia.
Meanwhile, humans repeatedly ran into Neanderthals as they spread across Eurasia.
"We still carry a little bit of their DNA today," said Dr Akey. "Even though these groups are extinct, their DNA lives on in modern humans."
Genetic ancestry
The research was carried out by several scientists, including Svante Pääbo of the Department of Evolutionary Genetics at the Max Planck Institute for Evolutionary Anthropology.
They found that all non-African populations inherited about 1.5-4% of their genomes from Neanderthals.
However, Melanesians were the only population that also had significant Denisovan genetic ancestry, representing between 1.9% and 3.4% of their genome.
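To put those percentages in concrete terms, a back-of-the-envelope conversion to base pairs can help; the roughly 3.2-billion-base-pair size of the human genome used below is a general figure assumed for illustration, not a number from the study.

```python
# Rough arithmetic sketch: how many base pairs do the reported
# ancestry percentages correspond to? GENOME_BP is an assumed,
# commonly cited approximation of human genome size.
GENOME_BP = 3_200_000_000

def archaic_bp(fraction):
    """Approximate base pairs inherited from an archaic population."""
    return int(GENOME_BP * fraction)

# Neanderthal ancestry in non-Africans: about 1.5-4% of the genome.
neanderthal_low = archaic_bp(0.015)   # 48,000,000 bp
neanderthal_high = archaic_bp(0.04)   # 128,000,000 bp

# Denisovan ancestry in Melanesians: about 1.9-3.4%.
denisovan_low = archaic_bp(0.019)     # 60,800,000 bp
denisovan_high = archaic_bp(0.034)    # 108,800,000 bp

print(neanderthal_low, neanderthal_high)  # 48000000 128000000
```

Even at the low end, that is tens of millions of letters of archaic DNA scattered through a modern genome.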
"I think that people (and Neanderthals and Denisovans) liked to wander," said Benjamin Vernot of the University of Washington, who led the project.
"And yes, studies like this can help us track where they wandered."
Less than a year ago, when a group of leading researchers was calling for a moratorium on the use of a revolutionary technology, Chinese researchers shocked the world by using it to genetically modify human embryos. The worry was that unfettered access to the technology might enable such embryos to become fully grown humans, who would then pass on mutations to all their offspring. The risk of unintended consequences seemed too great.
Now a different group of Chinese researchers has again wielded the technology to genetically modify human embryos. This time, however, the reaction from some scientists is just an annoyed shrug. Clearly a lot happened in the last year for perceptions to change so drastically.
The technology in question is called CRISPR, and it allows researchers to make genetic modifications with greater precision than ever before. In 2015, Chinese researchers used CRISPR to target genes responsible for a blood disorder called β-thalassaemia. They were able to replace the defective gene in only 28 out of 71 embryos. Worse still, the technique left a slew of unintended changes in other parts of the genome.
In the latest attempt, researchers at Guangzhou Medical University have gone a step further. Instead of trying to correct mutations that could cause disease, they used CRISPR technology to insert a genetic mutation that might offer resistance to HIV.
The researchers targeted the CCR5 gene, which produces a protein that HIV uses to latch on to, enter, and infect a human immune cell. If the CCR5 gene were mutated, the logic goes, HIV would not be able to infect the cell, and the mutation would thus confer resistance to the disease.
The researchers report in the Journal of Assisted Reproduction and Genetics that they successfully inserted the mutated gene in four out of 26 embryos. Even in the successful cases, not all copies of the CCR5 gene were modified, and in other cases unintended mutations were introduced.
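As rough arithmetic on the counts reported in the two studies, the success rates can be compared directly; the function below is purely illustrative.

```python
# Illustrative sketch: editing efficiency in the two Chinese embryo
# studies described in the article (counts from the text above).
def efficiency(successes, total):
    """Fraction of embryos in which the intended edit was achieved."""
    return successes / total

beta_thal_2015 = efficiency(28, 71)   # β-thalassaemia gene replaced
ccr5_latest = efficiency(4, 26)       # CCR5 mutation inserted

print(f"2015 study: {beta_thal_2015:.0%}")   # 39%
print(f"CCR5 study: {ccr5_latest:.0%}")      # 15%
```

On these headline numbers alone, the newer experiment's hit rate was well under half that of the first, which helps explain why researchers still call the technique far from ready.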
The experiment had been approved by a local ethics committee, which ensured that the study followed Chinese government guidelines.
All the experimental human embryos were “non-viable,” which means they would have been unable to become fully grown humans. Such abnormal embryos are an inevitable part of in-vitro fertilization therapy, where sometimes two sperm cells insert their DNA into a single egg.
“The results are both comforting and disturbing,” says Peter Donovan of the University of California at Irvine. “The good news is that the technique worked for this group in the same way that it did for the first group… an important part of the scientific process showing it wasn’t a fluke the first time. The salutary lesson is that there is still much to be learned about gene editing in human embryos before it is ready for prime time.”
The rate of failure has made some bioethicists and scientists question the motives of Chinese researchers who continue to test CRISPR in human embryos. They argue that, while CRISPR offers greater precision, it still isn’t ready for testing in human embryos. Others, like Donovan, maintain that studies using donated human embryos will give us the most understanding.
Despite the divided opinion, there is definitely a change in perception. The first study reporting the genetic modification of human embryos resulted in a summit held in November between the science academies of China, the US and the UK. After days of deliberation, the world’s leading geneticists agreed that, while no CRISPR-modified embryos should become full human beings, research using human embryos can continue.
The Chinese group that did the latest work insists that their “proof-of-concept” may provide solutions to improving human health. They write, “Despite the significant scientific and ethical issues involved, however, we believe that it is necessary to keep developing and improving the technologies for precise genetic modifications in humans.”
http://qz.com/
2015 was the year it became OK to genetically engineer babies
In April of this year, researchers in China published the results of an experiment in modifying the DNA of human embryos. Though the embryos they worked on were damaged ones that could not have grown into living babies, the work sent a tremor through the scientific establishment.
Just a month earlier, a group of leading geneticists had called for a moratorium on gene-editing in embryos. Since any genetic changes would get passed on to future generations, they argued, the risk of unintended consequences was too great. And in November, at a summit in Washington DC, scientists from across the world agreed that, while research should continue, it’s still too risky to allow any altered embryos to grow into full human beings.
And, yet, when historians of science look back decades from now, they may well mark 2015 as the year genetically engineering humans became acceptable. That’s because, while the world was paying attention to the gene-editing summit, a more momentous decision had been made just a month earlier in the UK. There, a governmental body got ready to hand out licenses for creating a particular kind of genetically engineered human—using a technique the US tried and then banned 13 years ago.
This technique won’t create the fabled “designer babies” just yet. But the changes made to an embryo will be hereditary, and thus alter the genetic makeup of all the offspring to follow. The story of how we got here, and what will come next, is why 2015 will be remembered as a turning point.
Just getting better
Our ability to do some form of genetic engineering goes back 40,000 years. Selective breeding created a tamer and more likable version of a wolf, the common ancestor of all today’s dogs.
Our desire to design better humans is also old. In Plato’s Republic, Socrates calls for a state-run program to get the best citizens to mate so that the population could be improved.
By the 19th century, the ideology of eugenics—a word not invented by Plato, but coined much later from the Greek for “good breeding”—had taken such a hold that countries were passing laws for such programs. Before World War II, 30 states in the US had passed some form of eugenics laws that mandated sexual sterilization of those deemed unfit (typically the mentally ill).
Only after the horror of Hitler’s genocide did the world recoil from eugenics. Most geneticists never returned to the idea that biological intervention would build a better society than social intervention. As Nathaniel Comfort, a professor of history of medicine, writes in Aeon, eugenics survived only in the form of “preventive medicine for genetic diseases”—such as screening people for them and, occasionally, treating them with gene therapy.
Not everyone stopped trying. Taking inspiration from Aldous Huxley’s Brave New World, eugenicist Robert Graham created the Repository for Germinal Choice, a sperm bank for the super-intelligent. The bank existed from 1980 to 1999 and had some 19 high-IQ donors, including at least one Nobel laureate (William Shockley).
The resulting “Genius Babies”—some 200 of them—are no different from normal people. One of those conceived through Graham’s sperm bank told CNN, “There’s only so much you can control when it comes to genetics. It all has to do with what you give to your family.”
Beyond our control
That comment defines the limits of science today. In the 1970s, we finally understood how to tweak the genes of microbes, plants and animals to achieve certain traits. But in humans, with the exception of a few things such as color blindness or tasting certain foods, “designer baby” traits, such as greater intelligence, taller stature, stronger muscles, or better memory, are controlled by hundreds of genes, each of which also performs many other critical functions. Tools that can deal with such complexity are still a long way off.
For now, then, the only foreseeable use for gene-editing is to prevent disease. And the goal is to make genetic tools good enough to do that without any unintended consequences.
Since 1989, thousands of people have received experimental gene therapies. Typically these involve the use of a “vector”—a biological vehicle, such as a virus, that can deliver the correct copy of a faulty gene to the specific cells in the body affected by the genetic disorder, such as cancer cells or faulty cells in the eye.
While most of these treatments have been safe, only a few have been effective. China approved the world’s first gene therapy in 2003 to treat certain kinds of cancers. Europe got its first in 2012 that treats a rare inherited disorder (pdf) affecting the pancreas. The US is likely to get its first gene therapy approved in 2016 to treat a form of blindness.
None of these therapies, however, have used the latest advance in gene-editing: CRISPR-Cas9, a highly precise copy-and-paste tool that allows for the removal and replacement of individual genes. Since its development in 2012, it has become an instant favorite among scientists.
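A toy sketch can illustrate where that precision comes from: the widely used Cas9 enzyme cuts only where its guide RNA matches a roughly 20-letter DNA sequence sitting immediately upstream of an "NGG" motif (the PAM). The sequences and function below are invented for illustration; real guide design also scans the reverse strand and scores near-match off-target sites genome-wide.

```python
# Toy model of Cas9 target recognition: a site is cut only if the
# 20-letter guide sequence matches AND is followed by an N-G-G PAM.
# Guide and DNA strings here are hypothetical examples.
def find_cas9_sites(dna, guide):
    """Return start positions where guide matches and an NGG PAM follows."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide):i + len(guide) + 3]
            if pam[1:] == "GG":           # PAM pattern is N-G-G
                sites.append(i)
    return sites

guide = "GATTACAGATTACAGATTAC"            # hypothetical 20-nt guide
# Two copies of the target: only the first is followed by a PAM ("TGG").
dna = "CCC" + guide + "TGG" + "AAAA" + guide + "TTT"
print(find_cas9_sites(dna, guide))        # [3]
```

The double requirement (full guide match plus adjacent PAM) is what lets Cas9 single out one site in billions of bases, and why edits still sometimes land at near-match sites elsewhere.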
CRISPR-Cas9’s immediate potential lies in curing single-gene disorders in embryos. Changes made at that stage would affect every cell in the body and could cure many diseases. We know of some 4,000 such disorders, and, though each is rare, put together they could change the lives of millions in the next generation, and keep many more free from those diseases for all the generations to follow.
However, the Chinese study earlier this year showed that we might need something even more precise than CRISPR-Cas9 currently is. In only about a third of the 86 embryos was the faulty gene replaced as intended, and even in those cases, CRISPR-Cas9 had also modified things it wasn’t meant to—the unintended consequences scientists worry about.
There is, however, another form of genetic engineering of human embryos. This is the one that the US tried and then banned, and that the British government recently opened licensing applications for. And what’s probably more important for the future of the debate is how Britain determined that the technology works and is safe.
Three-parent child
Alana Saarinen was born in the US with three biological parents. Two of them contributed more than 99% of her genetic material and the third provided the rest. She is one of only 30 or so people in the world who grew from a genetically engineered embryo into a healthy adult.
Sharon and Paul Saarinen had attempted in-vitro fertilization (IVF) four times, but without success. A likely reason was that Sharon’s egg cells had faulty mitochondria. These are like biological batteries within a cell—they play an essential role in converting your food into the energy that powers your body. Uniquely, they also have their own DNA, though it comprises only some 37 genes, compared with the 20,000 or so that make up the human genome.
During reproduction, when an egg cell fuses with a sperm cell, creating the first cell of an embryo, it’s only the DNA in its nucleus that is a mix of both parents’ DNA. Mitochondria and their DNA are passed on directly from mother to offspring. Because Sharon Saarinen’s mitochondria were faulty, she basically needed a mitochondria transplant in order to conceive.
That was where the third parent came in. A donor provided an egg cell, whose nucleus was removed; the healthy mitochondria, along with other bits of the cell, were transferred to Sharon’s egg. The egg cell was then mixed with Paul’s sperm cells in a normal IVF procedure, and the resulting embryo became Alana Saarinen.
This technique won’t only help women like Sharon conceive. In many cases of faulty mitochondria, pregnancies proceed normally, but the child then turns out to have one of several mitochondrial diseases, which can lead to all sorts of problems, from poor growth to autism to diabetes. One in every 5,000 children suffers from one of them.
Mitochondrial replacement therapy like the Saarinens had is currently the only known way of preventing mitochondrial diseases. But since they conceived Alana in 2000, only some 30 or so children have been born using the technique. In 2002, the US Food and Drug Administration (FDA) banned its use. Apart from ethical concerns about scientists “playing God,” there was a scientific worry too. We had never attempted to edit the “germ line”—the DNA that is transferred from one generation to another—and the risks were unknown. (About 10% of the pregnancies that resulted from this treatment had complications, but it wasn’t clear whether the procedure was to blame.)
Selfish genes
The FDA ban meant that US women with faulty mitochondria were left with difficult choices (pdf). They could choose not to have children, or undergo IVF and pick the fertilized embryo with the fewest defective mitochondria—taking a gamble on whether their child would develop mitochondrial disease.
But nearly a decade later the UK government’s Human Fertilisation and Embryology Authority (HFEA) took up the case. In 2012, after taking a detailed look at the results of studies on animals and humans, it deemed that mitochondrial replacement therapy was “not unsafe”—meaning that the benefits of curing mitochondrial disease would outweigh the risks of the procedure.
The interesting thing was what the HFEA did next. In Sept. 2012, it launched a public consultation, creating a website that explained both the risks and benefits, and holding public events to do the same. Then it conducted a survey and asked people to send in their comments online. After the public had shown broad support for the therapy—and despite stiff opposition from scholarly groups and religious groups alike—the HFEA spent two years taking the necessary steps to get the regulations discussed in parliament. In February, MPs agreed to allow the use of the therapy under strict guidelines. In October, the process for handing out licenses began.
We already use genetic engineering to create climate-resistant crops and drug-producing bacteria. Now one of the world’s most scientifically advanced countries—and, fittingly, the birthplace of IVF—has agreed that genetically modified humans, too, are sometimes not just OK, but desirable. This is what makes 2015 a historic year.
Based on past progress, it is likely that genetic enhancements to humans will become a reality step by step. Just like mitochondrial replacement therapy, they will first appear for a very narrow purpose, such as curing single-gene disorders, and then, likely over many decades, we might reach the stage of creating those fabled designer babies.
That gives us enough time to deliberate the implications of each step. When our decisions will affect generations of humans to come, it is important we use that time well. The process that HFEA designed to win public and political support is a model worth emulating. If each step were to get the same scrutiny that mitochondrial replacement therapy got, genetically modified humans could become as normal as genetically modified crops and bacteria are today—and, barring the occasional controversy, as widely accepted.
Corrected Dec. 23: An earlier version of this post incorrectly said that Sharon Saarinen’s nucleus was implanted in a donor’s egg. It also said that HFEA began handing out licenses in October, but in fact it then began the process of handing them out.
http://qz.com/
The pros and cons of genetically engineering your children
From time to time, science troubles philosophers with difficult ethical questions. But none has been as difficult as considering permanently altering the genetic code of future generations. At a meeting that began on Dec. 1 in Washington DC, the world’s leading gene-editing experts met with ethicists, lawyers, and interested members of the public to decide whether it should be done.
Gene-editing tools have existed since 1975, when a meeting of a similar kind was held to discuss the future of genetic technology. But recent developments have made the technology safe enough to consider turning science fiction into reality. In fact, in April, Chinese researchers announced that they had conducted experiments to remove genes of an inheritable disease in human embryos (embryos that were alive but damaged, so they could not have become babies).
So the stakes are high. By eliminating “bad” genes from sperm and egg cells—called the “germline”—these tools have the potential to permanently wipe out diseases caused by single mutations in genes, such as cystic fibrosis, Huntington’s disease, or Tay-Sachs.
At the same time, there is huge uncertainty about what could go wrong if seemingly troubling genes are eliminated.
One of the key researchers in the field is Jennifer Doudna at the University of California, Berkeley. She has been touted for a Nobel Prize for the development of CRISPR-Cas9, a highly precise copy-paste genetic tool. In the build-up to the meeting, Doudna made her concerns clear in Nature:
“Human-germline editing for the purposes of creating genome-modified humans should not proceed at this time, partly because of the unknown social consequences, but also because the technology and our knowledge of the human genome are simply not ready to do so safely.”
Her sentiments were echoed in a report released before the meeting by the Center for Genetics and Society. They believe that research in genetic tools must advance, but only through therapy for adults (where genetic modifications are targeted at some cells in the body but not passed on to kids, such as in curing a form of inherited blindness). The report continues:
“But using the same techniques to modify embryos in order to make permanent, irreversible changes to future generations and to our common genetic heritage—the human germline, as it is known—is far more problematic.”
Consider sickle-cell anemia, an occasionally fatal genetic disorder. Its genes, though clearly harmful, have persisted and spread because, while having two copies of the sickle-cell gene causes anemia, having just one copy happens to provide protection against malaria, one of the most deadly diseases in human history. Had we eliminated sickle-cell genes without knowing about this benefit, it would have proved a bad idea.
More importantly, there is a worry that once you allow for designer babies you go down a slippery slope. Emily Smith Beitiks, disability researcher at the University of California, San Francisco, said recently:
“These proposed applications raise social justice questions and put us at risk of reviving eugenics—controlled breeding to increase the occurrence of ‘desirable’ heritable characteristics. Who gets to decide what diversity looks like and who is valued?”
But the history of science shows that it is hard to keep such a cat in the bag. Once developed, technologies have a way of finding their way into the hands of those who desire to use them. That worries George Church, a geneticist at Harvard Medical School, who has been a strong voice in this debate since the beginning. In Nature, he writes:
“Banning human-germline editing could put a damper on the best medical research and instead drive the practice underground to black markets and uncontrolled medical tourism, which are fraught with much greater risk and misapplication.”
And many believe that the risks of gene-editing are not that high anyway. Nathaniel Comfort, a historian of medicine at Johns Hopkins University in Baltimore, writes in Aeon: “The dishes do not come à la carte. If you believe that made-to-order babies are possible, you oversimplify how genes work.”
That is because abilities, such as intelligence, height, or personality traits, involve thousands of genes. So there may be some things that you cannot genetically enhance much, and certainly not safely. And even knowingly changing the human genome is not as big a deal as some make it out to be, Church notes:
“Offspring do not consent to their parents’ intentional exposure to mutagenic sources that alter the germ line, including chemotherapy, high altitude, and alcohol—nor to decisions that reduce the prospects for future generations, such as misdirected economic investment and environmental mismanagement.”
The meeting ended on Dec. 3, and the committee of organizers—10 scientists and two bioethicists—came to a conclusion on the debate. They believe that the promises of germline editing are too great to scupper future developments. They endorse that research should continue in non-human embryos and “if, in the process of research, early human embryos … undergo gene editing, the modified cells should not be used to establish a pregnancy.” That is because the committee believes that we neither know enough about safety issues to allow any clinical application, nor enough about how society will respond to the use of this technology in humans.
And, yet, perhaps the last word on the debate should go to a woman in the audience at the meeting. Her child died at only six days old after torturous seizures caused by a genetic ailment. She implored the research community, “If you have the skills and the knowledge to eliminate these diseases, then freakin’ do it!”
Scientists have synthesized a 'minimal genome' with only genes necessary for life
By Chelsea Harvey
A pioneering accomplishment in the field of genetic research could help scientists gain new insights into the very definition of life. The new research, published Thursday in the journal Science, describes the synthetic creation of a “minimal genome”: a cell containing only the genes absolutely required to keep itself alive. With just 473 genes, its genome is smaller than that of any living, dividing cell found in nature and may provide important insights into the fundamental genetic requirements for life.
The idea of designing and studying a “minimal genome” is a concept that’s fascinated scientists for decades. In fact, unlocking the secrets of the genome has been a preoccupation of genetic researchers since the first genome sequencing was performed on a bacterium in 1995 — the event that ultimately led to this week’s breakthrough, according to the new study’s authors.
“This is a study that had its origins a little over 20 years ago in 1995, when this institute sequenced the very first genome in history, Haemophilus influenzae,” said the new paper’s senior author J. Craig Venter, founder of the J. Craig Venter Institute, which specializes in genomic research, during a Wednesday teleconference.
Later that same year, the institute also sequenced the genome of a second type of bacteria, Mycoplasma genitalium. These breakthroughs allowed for the first genomic comparisons between two different species, Venter said.
Venter is most famous for his role as a leader of the team that first sequenced the human genome in 2000.
“[My colleagues] and myself were discussing the philosophy of these differences in the genomes and decided the only way to answer basic questions about life would be to get to a minimal genome, and probably the only way to do that would be by trying to synthesize a genome,” Venter said. “And that started our 20-year quest to do this.”
The reason that researchers must synthesize, or essentially design their own, minimal genome is that just about every living organism we know of contains more genes than are actually necessary for its basic survival. Even the simplest bacteria contain extra, nonessential genes that are related to their growth, development and ability to react to their environment, but that aren’t technically required to keep the cell alive.
So in order to get down to a truly minimal genome, scientists must take an existing genetic sequence and pare it down themselves, cutting away all the nonessential genes until they end up with only the ones that are absolutely essential.
They do this by creating synthetic genomes — genomes that are designed and chemically built from the ground up using our existing knowledge of an organism’s genetic information.
Along the way, scientists can add or delete genetic information as they see fit. It’s the same basic principle that’s used in genetic engineering research. But in the case of a minimal genome, the goal is to slice off as much unnecessary genetic information as possible without changing or adding anything else to the organism’s genome.
And that’s just what Venter and his colleagues set out to do.
DNA minimalism
They started with the genome of a type of bacteria known as Mycoplasma mycoides, a parasite normally found in cows and goats. In 2010, the group succeeded in building the complete M. mycoides genome from scratch and transplanting it into another cell.
This time around, they used a variety of methods to whittle the genome down before transplanting it.
To start, the researchers divided the bacterium’s genome into eight different segments that could be individually altered and tested — just to make the experiments a little more manageable. They then applied a handful of techniques to peel away the nonessential genes.
They call this their “design-build-test” approach.
First, they applied their basic knowledge of genetics and biochemistry to infer which genes might be safe to remove — but this technique did not produce viable cells.
The researchers then conducted a series of experiments in which they inserted bits of foreign genetic information — called transposons — into the genome in order to disrupt the functions of certain genes and figure out which ones the cell could do without. This process helped them whittle down the genome until no more genes could be removed.
Along the way, the researchers were able to divide the bacterium’s genes into three major categories: essential, nonessential and quasi-essential, meaning they weren’t absolutely required for life but were necessary to help the cell grow at a healthy pace.
Venter and his colleagues also discovered that the genome contained a number of redundant genes — pairs of genes that performed the same function in the cell. These genes made the whittling process a little confusing at first — if one of the redundant genes was removed (but not the other), the cell would continue functioning, tricking the researchers into believing it was a nonessential gene.
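The redundancy pitfall described above can be sketched as a toy model: when two genes cover the same essential function, knocking out either one alone leaves the cell viable, so a single-knockout screen labels both "nonessential" even though the pair, taken together, is essential. The gene names and functions here are invented for illustration.

```python
# Toy model of essentiality screening with redundant genes.
# Hypothetical gene-to-function map: geneA and geneB redundantly
# provide the "membrane" function; geneC alone provides "replication".
ESSENTIAL_FUNCTIONS = {"replication", "membrane"}

GENE_FUNCTIONS = {
    "geneA": "membrane",
    "geneB": "membrane",
    "geneC": "replication",
}

def viable(removed):
    """The cell lives if every essential function is still covered."""
    covered = {f for g, f in GENE_FUNCTIONS.items() if g not in removed}
    return ESSENTIAL_FUNCTIONS <= covered

# Single knockouts: each redundant gene looks dispensable on its own...
print(viable({"geneA"}), viable({"geneB"}))  # True True
# ...but removing the pair together kills the cell.
print(viable({"geneA", "geneB"}))            # False
```

This is why the whittling required trial and error over combinations of deletions, not just one pass of single-gene tests.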
A great deal of trial and error was required in order for the researchers to classify all the genes.
Finally, though, they reached a point where no more genes could be removed without killing the cell.
The result is the smallest genome ever recorded in a self-replicating — that means alive and able to divide — cell. It contains just 473 genes, all of which are either directly required to keep the cell alive or needed to enable it to grow and divide fast enough to be practical for the researchers’ experiments.
Interestingly, about a third of the resulting genome consists of genes with unknown biological functions. Most of the known essential genes perform functions related to expressing genes, passing down genetic information from one generation to the next, or performing essential functions in the cell’s membrane and cytosol, so the scientists predict that the unknown genes will have similar jobs — we just don’t know what yet.
“One of the great findings but also the great caveats of this work is that it allowed them to discover how much we don’t know, even about the core sections of the genome,” said Adam Arkin, director of the Synthetic Biology Institute at the University of California Berkeley, in a statement.
That said, Venter also noted that the concept of a minimal cell is context-dependent. The specific genes that an organism requires to survive — even an organism as simple as a bacterial cell — depend on what kind of environment the cell is living in and what kinds of nutrients are available to it.
And, of course, one species’ minimal genome would likely differ significantly from that of another species.
With that in mind, exploring different forms of minimal genomes could have important industrial applications, said Daniel Gibson, another of the study’s authors and another scientist at the J. Craig Venter Institute, during the same teleconference.
Because these cells are so simple, devote all their energy to essential functions and are subject to very few genetic mutations, they are “straightforward to engineer” and could provide helpful insights into more complex types of biosynthesis in the future, he said.
Still, there’s plenty of work left to be done before the study of minimal genomes may yield practical applications.
“The major limitation is that this is the beginning of a very long road,” said Sriram Kosuri, an assistant professor of biochemistry at UCLA, in a statement.
"It's not as if this new minimal genome will automatically lead to either fundamental insights or industrial applications immediately. That said, they've created a self-replicating biological organism that might be a better starting point for such scientific and engineering goals than continuing to study natural systems."