Showing posts with label Habitats and Biome. Show all posts

Sunday, April 10, 2016

Chinese researchers have genetically modified human embryos—yet again

http://qz.com

Less than a year ago, when a group of leading researchers was calling for a moratorium on the use of a revolutionary technology, Chinese researchers shocked the world by using it to genetically modify human embryos. The worry was that unfettered access to the technology might enable such embryos to become fully grown humans, who would then pass on mutations to all their offspring. The risk of unintended consequences seemed too great.
Now a different group of Chinese researchers has again wielded the technology to genetically modify human embryos. This time, however, the reaction from some scientists is just an annoyed shrug. Clearly a lot has happened in the past year for perceptions to change so drastically.
The technology in question is called CRISPR, and it allows researchers to make genetic modifications with greater precision than ever before. In 2015, Chinese researchers used CRISPR to target genes responsible for a blood disorder called β-thalassaemia. They were able to replace the defective gene in only 28 out of 71 embryos. Worse still, the procedure left a slew of unintended changes in other parts of the genome.
In the latest attempt, researchers at Guangzhou Medical University have gone a step further. Instead of trying to correct mutations that could cause disease, they used CRISPR technology to insert a genetic mutation that might offer resistance against HIV.
The mutation targeted the CCR5 gene, which is responsible for producing a protein that HIV uses to latch on to, enter, and infect a human immune cell. If the CCR5 gene were mutated, the logic goes, the HIV virus would not be able to infect cells—and thus the mutation would confer resistance to the disease.
The researchers report in the Journal of Assisted Reproduction and Genetics that they successfully inserted the mutated gene in four out of 26 embryos. Even in the successful cases, not all copies of the CCR5 gene were modified, and in other cases unintended mutations were introduced.
The experiment had been approved by a local ethics committee, which ensured that the study followed Chinese government guidelines. All the experimental human embryos were “non-viable,” which means they would have been unable to become fully grown humans. Such abnormal embryos are an inevitable part of in-vitro fertilization therapy, where sometimes two sperm insert their DNA into a single egg.
“The results are both comforting and disturbing,” says Peter Donovan of the University of California at Irvine. “The good news is that the technique worked for this group in the same way that it did for the first group… an important part of the scientific process showing it wasn’t a fluke the first time. The salutary lesson is that there is still much to be learned about gene editing in human embryos before it is ready for prime time.”
The rate of failure has made some bioethicists and scientists question the motives of Chinese researchers who continue to test CRISPR in human embryos. They argue that, while CRISPR offers greater precision, it still isn’t ready for testing in human embryos. Others, like Donovan, maintain that it will be studies using donated human embryos that will give us the most understanding.
Despite the divided opinion, there is definitely a change in perception. The first study reporting the genetic modification of human embryos resulted in a summit held in November between the science academies of China, the US and the UK. After days of deliberation, the world’s leading geneticists agreed that, while no CRISPR-modified embryos should become full human beings, research using human embryos can continue.
The Chinese group that did the latest work insists that their “proof-of-concept” may provide solutions to improving human health. They write, “Despite the significant scientific and ethical issues involved, however, we believe that it is necessary to keep developing and improving the technologies for precise genetic modifications in humans.”

http://qz.com/

2015 was the year it became OK to genetically engineer babies


December 22, 2015
In April of this year, researchers in China published the results of an experiment in modifying the DNA of human embryos. Though the embryos they worked on were damaged ones that could not have grown into living babies, it sent a tremor through the scientific establishment.
Just a month earlier, a group of leading geneticists had called for a moratorium on gene-editing in embryos. Since any genetic changes would get passed on to future generations, they argued, the risk of unintended consequences was too great. And in November, at a summit in Washington DC, scientists from across the world agreed that, while research should continue, it’s still too risky to allow any altered embryos to grow into full human beings.
And, yet, when historians of science look back decades from now, they may well mark 2015 as the year genetically engineering humans became acceptable. That’s because, while the world was paying attention to the gene-editing summit, a more momentous decision had been made just a month earlier in the UK. There, a governmental body got ready to hand out licenses for creating a particular kind of genetically engineered human—using a technique the US tried and then banned 13 years ago.
This technique won’t create the fabled “designer babies” just yet. But the changes made to an embryo will be hereditary, and thus alter the genetic makeup of all the offspring to follow. The story of how we got here, and what will come next, is why 2015 will be remembered as a turning-point.


Just getting better

Our ability to do some form of genetic engineering goes back 40,000 years. Selective breeding created a tamer and more likable version of a wolf, the common ancestor of all today’s dogs.
Our desire to design better humans is also old. In Plato’s Republic, Socrates calls for a state-run program to get the best citizens to mate so that the population could be improved.
By the 19th century, the ideology of eugenics—a word not invented by Plato, but coined much later from the Greek for “good breeding”—had taken such a hold that countries were passing laws for such programs. Before World War II, 30 states in the US had passed some form of eugenics laws that mandated sexual sterilization of those deemed unfit (typically the mentally ill).
Only after the horror of Hitler’s genocide did the world recoil from eugenics. Most geneticists never returned to the idea that biological intervention would build a better society than social intervention. As Nathaniel Comfort, a professor of history of medicine, writes in Aeon, eugenics survived only in the form of “preventive medicine for genetic diseases”—such as screening people for them and, occasionally, treating them with gene therapy.
Not everyone stopped trying. Taking inspiration from Aldous Huxley’s Brave New World, eugenicist Robert Graham created the Repository for Germinal Choice, a sperm bank for the super-intelligent. The bank existed from 1980 to 1999 and had some 19 high-IQ donors, including at least one Nobel laureate (William Shockley).
The resulting “Genius Babies”—some 200 of them—are no different from normal people. One of those conceived through Graham’s sperm bank told CNN, “There’s only so much you can control when it comes to genetics. It all has to do with what you give to your family.”


Beyond our control

That comment defines the limits of science today. In the 1970s, we finally understood how to tweak the genes of microbes, plants and animals to achieve certain traits. But in humans, with the exception of a few things such as color blindness or tasting certain foods, “designer baby” traits, such as greater intelligence, taller stature, stronger muscles, or better memory, are controlled by hundreds of genes, each of which also performs many other critical functions. Tools that can deal with such complexity are still a long way off.
For now, then, the only foreseeable use for gene-editing is to prevent disease. And the goal is to make genetic tools good enough to do that without any unintended consequences.
Since 1989, thousands of people have received experimental gene therapies. Typically these involve the use of a “vector”—a biological vehicle, such as a virus, that can deliver the correct copy of a faulty gene to the specific cells in the body affected by the genetic disorder, such as cancer cells or faulty cells in the eye.
While most of these treatments have been safe, only a few have been effective. China approved the world’s first gene therapy in 2003, to treat certain kinds of cancers. Europe approved its first in 2012, a treatment for a rare inherited disorder (pdf) affecting the pancreas. The US is likely to approve its first gene therapy in 2016, to treat a form of blindness.
None of these therapies, however, have used the latest advance in gene-editing: CRISPR-Cas9, a highly precise copy-and-paste tool that allows for the removal and replacement of individual genes. Since its development in 2012, it has become an instant favorite among scientists.
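As a rough illustration of where that precision comes from: Cas9 is commonly described as being steered by a guide RNA to a DNA sequence (the “protospacer”) that must sit immediately next to a short “NGG” motif called the PAM. The toy sketch below, using a made-up DNA string and a shortened guide for readability, shows only that matching logic, not the biochemistry.

```python
# Toy sketch of Cas9 target recognition: the guide must match a stretch of
# DNA that is immediately followed by an "NGG" PAM motif, where N is any base.
# The sequence and guide below are invented for this example.

def find_cas9_sites(dna: str, guide: str) -> list[int]:
    """Return start positions where `guide` matches and is followed by an NGG PAM."""
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i : i + len(guide)]
        pam = dna[i + len(guide) : i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":  # first PAM base is free
            hits.append(i)
    return hits

dna = "TTACGTACCGGAATGCTGACATGGTTT"
guide = "ACCGGAATGCTGACA"  # real guides are ~20 nt; shortened here
print(find_cas9_sites(dna, guide))  # prints [6]
```

The off-target effects mentioned above arise because matches need not be perfect in practice: sites that differ from the guide by a few bases can still be cut, which this exact-match toy deliberately ignores.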
CRISPR-Cas9’s immediate potential lies in curing single-gene disorders in embryos. Changes made at that stage would affect every cell in the body and could cure many diseases. We know of some 4,000 such disorders, and, though each is rare, together they could change the lives of millions in the next generation, and keep many more free from those diseases for all the generations to follow.
However, the Chinese study earlier this year showed that we might need something even more precise than CRISPR-Cas9 currently is. Only in one-third of the 86 embryos was the faulty gene erased as predicted, and even in those cases, CRISPR-Cas9 had also modified things it wasn’t meant to—the unintended consequences scientists worry about.
There is, however, another form of genetic engineering of human embryos. This is the one that the US tried and then banned, and that the British government recently opened licensing applications for. And what’s probably more important for the future of the debate is how Britain decided the technology works and is safe.


Three-parent child

Alana Saarinen was born in the US with three biological parents. Two of them contribute more than 99% of her genetic material and the third provides the rest. She is one of only 30 or so people in the world who grew from a genetically engineered embryo into a healthy adult.
Sharon and Paul Saarinen had attempted in-vitro fertilization (IVF) four times, but without success. A likely reason was that Sharon’s egg cells had faulty mitochondria. These are like biological batteries within a cell—they play an essential role in converting your food into the energy that powers your body. Uniquely, they also have their own DNA (though it’s only some 37 of the 20,000 or so genes that make up the human genome).
During reproduction, when an egg cell fuses with a sperm cell, creating the first cell of an embryo, it’s only the DNA in its nucleus that is a mix of both parents’ DNA. Mitochondria and their DNA are passed on directly from mother to offspring. Because Sharon Saarinen’s mitochondria were faulty, she basically needed a mitochondria transplant in order to conceive.
That was where the third parent came in. A donor provided an egg cell, whose nucleus was removed and healthy mitochondria along with other bits of the cell were transferred to Sharon’s egg. The egg cell was then mixed with Paul’s sperm cells in a normal IVF procedure, and the resulting embryo would become Alana Saarinen.
This technique won’t only help women like Sharon conceive. In a lot of cases with faulty mitochondria, pregnancies proceed normally, but the child then turns out to have one of several mitochondrial diseases, which can lead to all sorts of problems, from poor growth to autism to diabetes. One in every 5,000 children suffers from one of them.
Mitochondrial replacement therapy like the one the Saarinens had is currently the only known way of preventing mitochondrial diseases. But since they conceived Alana in 2000, only some 30 or so children have been born using the technique. In 2002, the US Food and Drug Administration (FDA) banned its use. Apart from ethical concerns about scientists “playing God,” there was a scientific worry too. We had never attempted to edit the “germ line”—the DNA that is transferred from one generation to another—and the risks were unknown. (About 10% of the pregnancies that resulted from this treatment had complications, but it wasn’t clear whether the procedure was to blame.)

Selfish genes

The FDA ban meant that US women with faulty mitochondria were left with difficult choices (pdf). They could choose not to have children, or undergo IVF and pick the fertilized embryo with the fewest defective mitochondria—taking a gamble on whether their child would develop mitochondrial disease.
But nearly a decade later the UK government’s Human Fertilisation and Embryology Authority (HFEA) took up the case. In 2012, after taking a detailed look at the results of studies on animals and humans, it deemed that mitochondrial replacement therapy was “not unsafe”—meaning that the benefits of curing mitochondrial disease would outweigh the risks of the procedure.
The interesting thing was what the HFEA did next. In Sept. 2012, it launched a public consultation, creating a website that explained both the risks and benefits, and holding public events to do the same. Then it conducted a survey and asked people to send in their comments online. After the public had shown broad support for the therapy—and despite stiff opposition from scholarly groups and religious groups alike—the HFEA spent two years taking the necessary steps to get the regulations discussed in parliament. In February, MPs agreed to allow the use of the therapy under strict guidelines. In October, the process for handing out licenses began.
We already use genetic engineering to create climate-resistant crops and drug-producing bacteria. Now one of the world’s most scientifically advanced countries—and, fittingly, the birthplace of IVF—has agreed that genetically modified humans, too, are sometimes not just OK, but desirable. This is what makes 2015 an historic year.
Based on past progress, it is likely that genetic enhancements to humans will become a reality step by step. Just like mitochondrial replacement therapy, they will first appear for a very narrow purpose, such as curing single-gene disorders, and then, likely over many decades, we might reach the stage of creating those fabled designer babies.
That gives us enough time to deliberate the implications of each step. When our decisions will affect generations of humans to come, it is important we use that time well. The process that HFEA designed to win public and political support is a model worth emulating. If each step were to get the same scrutiny that mitochondrial replacement therapy got, genetically modified humans could become as normal as genetically modified crops and bacteria are today—and, barring the occasional controversy, as widely accepted.
Corrected Dec. 23: An earlier version of this post incorrectly said that Sharon Saarinen’s nucleus was implanted in a donor’s egg. It also said that HFEA began handing out licenses in October, but in fact it then began the process of handing them out.

 http://qz.com/

The pros and cons of genetically engineering your children



December 03, 2015
From time to time, science troubles philosophers with difficult ethical questions. But none has been as difficult as the question of permanently altering the genetic code of future generations. At a meeting that began on Dec. 1 in Washington DC, the world’s leading gene-editing experts met with ethicists, lawyers, and interested members of the public to decide whether it should be done.
Gene-editing tools have existed since 1975, when a meeting of a similar kind was held to discuss the future of genetic technology. But recent developments have made the technology safe enough to consider turning science fiction into reality. In fact, in April, Chinese researchers announced that they had conducted experiments to remove genes of an inheritable disease in human embryos (embryos that were alive but damaged, so they could not have become babies).
So the stakes are high. By eliminating “bad” genes from sperm and egg cells—called the “germline”—these tools have the potential to permanently wipe out diseases caused by single mutations in genes, such as cystic fibrosis, Huntington’s disease, or Tay-Sachs.
At the same time, there is huge uncertainty about what could go wrong if seemingly troubling genes are eliminated.
One of the key researchers in the field is Jennifer Doudna at the University of California, Berkeley. She has been touted for a Nobel Prize for the development of CRISPR-Cas9, a highly precise copy-paste genetic tool. In the build-up to the meeting, Doudna made her concerns clear in Nature:
“Human-germline editing for the purposes of creating genome-modified humans should not proceed at this time, partly because of the unknown social consequences, but also because the technology and our knowledge of the human genome are simply not ready to do so safely.”
Her sentiments were echoed in a report released before the meeting by the Center for Genetics and Society. They believe that research in genetic tools must advance, but only through therapy for adults (where genetic modifications are targeted at some cells in the body but not passed on to kids, such as in curing a form of inherited blindness). The report continues:
“But using the same techniques to modify embryos in order to make permanent, irreversible changes to future generations and to our common genetic heritage—the human germline, as it is known—is far more problematic.”
Consider sickle-cell anemia, an occasionally fatal genetic disorder. Its genes, though clearly harmful, have persisted and spread because, while having two copies of the sickle-cell gene causes anemia, having just one copy happens to provide protection against malaria, one of the most deadly diseases in human history. Had we not known about their benefits, eliminating sickle-cell genes would have proved to be a bad idea.
More importantly, there is a worry that once you allow for designer babies you go down a slippery slope. Emily Smith Beitiks, disability researcher at the University of California, San Francisco, said recently:
“These proposed applications raise social justice questions and put us at risk of reviving eugenics—controlled breeding to increase the occurrence of ‘desirable’ heritable characteristics. Who gets to decide what diversity looks like and who is valued?”
But the history of science shows that it is hard to keep such a cat in the bag. Once developed, technologies have a way of finding their way into the hands of those who desire to use them. That worries George Church, a geneticist at Harvard Medical School, who has been a strong voice in this debate since the beginning. In Nature, he writes:
“Banning human-germline editing could put a damper on the best medical research and instead drive the practice underground to black markets and uncontrolled medical tourism, which are fraught with much greater risk and misapplication.”
And many believe that the risks of gene-editing are not that high anyway. Nathaniel Comfort, a historian of medicine at Johns Hopkins University in Baltimore, writes in Aeon:
“The dishes do not come à la carte. If you believe that made-to-order babies are possible, you oversimplify how genes work.”
That is because abilities, such as intelligence, height, or personality traits, involve thousands of genes. So there may be some things that you cannot genetically enhance much, and certainly not safely. And even knowingly changing the human genome is not as big a deal as some make it out to be, Church notes:
“Offspring do not consent to their parents’ intentional exposure to mutagenic sources that alter the germ line, including chemotherapy, high altitude, and alcohol—nor to decisions that reduce the prospects for future generations, such as misdirected economic investment and environmental mismanagement.”
The meeting ended on Dec. 3, and the committee of organizers—10 scientists and two bioethicists—came to a conclusion on the debate. They believe that the promises of germline editing are too great to scupper future developments. They recommend that research continue in non-human embryos and that “if, in the process of research, early human embryos … undergo gene editing, the modified cells should not be used to establish a pregnancy.” That is because the committee believes that we neither know enough about safety issues to allow any clinical application, nor enough about how society will respond to the use of this technology in humans.
And, yet, perhaps the last word on the debate should go to a woman in the audience at the meeting. Her child died at only six days old, after torturous seizures caused by a genetic ailment. She implored the research community, “If you have the skills and the knowledge to eliminate these diseases, then freakin’ do it!”

Scientists have synthesized a 'minimal genome' with only genes necessary for life

A pioneering accomplishment in the field of genetic research could help scientists gain new insights into the very definition of life. The new research, published Thursday in the journal Science, describes the synthetic creation of a “minimal genome” — a cell containing only the genes absolutely required to keep itself alive.
With just 473 genes, it’s the smallest genome of any living, dividing cell found in nature and may provide important insights into the fundamental genetic requirements for life.
The idea of designing and studying a “minimal genome” is a concept that’s fascinated scientists for decades. In fact, unlocking the secrets of the genome has been a preoccupation of genetic researchers since the first genome sequencing was performed on a bacterium in 1995 — the event that ultimately led to this week’s breakthrough, according to the new study’s authors.
Researchers have designed and synthesized a minimal bacterial genome, containing only the genes necessary for life. Image: C. Bickel/Science
“This is a study that had its origins a little over 20 years ago in 1995, when this institute sequenced the very first genome in history, Haemophilus influenzae,” said the new paper’s senior author J. Craig Venter, founder of the J. Craig Venter Institute, which specializes in genomic research, during a Wednesday teleconference.
Later that same year, the institute also sequenced the genome of a second type of bacteria, Mycoplasma genitalium. These breakthroughs allowed for the first genomic comparisons between two different species, Venter said.
Venter is most famous for his role as a leader of the team that first sequenced the human genome in 2000.
“[My colleagues] and myself were discussing the philosophy of these differences in the genomes and decided the only way to answer basic questions about life would be to get to a minimal genome, and probably the only way to do that would be by trying to synthesize a genome,” Venter said.

“And that started our 20-year quest to do this.”
The reason that researchers must synthesize, or essentially design their own, minimal genome is that just about every living organism we know of contains more genes than are actually necessary for its basic survival. Even the simplest bacteria contain extra, nonessential genes related to their growth, development and ability to react to their environment, but not technically required to keep the cell alive.
So in order to get down to a truly minimal genome, scientists must take an existing genetic sequence and pare it down themselves, cutting away all the nonessential genes until they end up with only the ones that are absolutely essential.

They do this by creating synthetic genomes — genomes that are designed and chemically built from the ground up using our existing knowledge of an organism’s genetic information.
Along the way, scientists can add or delete genetic information as they see fit. It’s the same basic principle that’s used in genetic engineering research. But in the case of a minimal genome, the goal is to slice off as much unnecessary genetic information as possible without changing or adding anything else to the organism’s genome.
And that’s just what Venter and his colleagues set out to do.

DNA minimalism

They started with the genome of a type of bacteria known as Mycoplasma mycoides, a parasite normally found in cows and goats. In 2010, the group succeeded in building the complete M. mycoides genome from scratch and transplanting it into another cell.
J. Craig Venter receives the National Medal of Science on October 7, 2009 in Washington, DC. Image: Getty Images
This time around, they used a variety of methods to whittle the genome down before transplanting it.
To start, the researchers divided the bacterium’s genome into eight different segments that could be individually altered and tested — just to make the experiments a little more manageable. They then applied a handful of techniques to peel away the nonessential genes.
They call this their “design-build-test” approach.
First, they applied their basic knowledge of genetics and biochemistry to infer which genes might be safe to remove — but this technique did not produce viable cells.
The researchers then conducted a series of experiments in which they inserted bits of foreign genetic information — called transposons — into the genome in order to disrupt the functions of certain genes and figure out which ones the cell could do without. This process helped them whittle down the genome until no more genes could be removed.
Along the way, the researchers were able to divide the bacterium’s genes into three major categories: essential, nonessential and quasi-essential, meaning they weren’t absolutely required for life but were necessary to help the cell grow at a healthy pace.
Venter and his colleagues also discovered that the genome contained a number of redundant genes — pairs of genes that performed the same function in the cell. These genes made the whittling process a little confusing at first — if one of the redundant genes was removed (but not the other), the cell would continue functioning, tricking the researchers into believing it was a nonessential gene.
A great deal of trial and error was required in order for the researchers to classify all the genes.
Finally, though, they reached a point where no more genes could be removed without killing the cell.
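The loop the researchers describe, trying a deletion, testing whether the cell survives, and repeating until nothing more can go, can be sketched in miniature. Everything below is invented for illustration: the "viability" rule stands in for growing real cells, with gene "ESS" essential, genes "A" and "B" a redundant pair (at least one must remain), and the rest dispensable.

```python
# Hypothetical sketch of the iterative whittling described above.
# viable() is a stand-in for actually culturing the cell; its rule is
# made up: "ESS" is essential, and "A"/"B" are redundant (keep one).

def viable(genes: set[str]) -> bool:
    return "ESS" in genes and ("A" in genes or "B" in genes)

def minimize(genome: set[str]) -> set[str]:
    """Greedily delete genes one at a time while the cell stays viable."""
    genes = set(genome)
    changed = True
    while changed:
        changed = False
        for g in sorted(genes):          # snapshot, so mutation is safe
            if viable(genes - {g}):
                genes.discard(g)         # deletion tolerated: drop the gene
                changed = True
    return genes

print(sorted(minimize({"ESS", "A", "B", "junk1", "junk2"})))
```

Note how the redundant pair behaves exactly as the article describes: deleting "A" alone is tolerated, so "A" looks nonessential, yet "B" then becomes undeletable. Only testing deletions against the current, already-reduced genome, rather than one at a time against the original, exposes this.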
The result is the smallest genome ever recorded in a self-replicating — that means alive and able to divide — cell. It contains just 473 genes, all of which are either directly required to keep the cell alive or to enable it to grow and divide fast enough to be practical for the researchers’ experiments.
J. Craig Venter poses before a gene map of a flu-causing bacterium in his Rockville, Md., office, March 12, 1997. Image: Ruth Fremson/AP
Interestingly, about a third of the resulting genome consists of genes with unknown biological functions. Most of the known essential genes perform functions related to expressing genes, passing down genetic information from one generation to the next, or performing essential functions in the cell’s membrane and cytosol, so the scientists predict that the unknown genes will have similar jobs — we just don’t know what yet.
“One of the great findings but also the great caveats of this work is that it allowed them to discover how much we don’t know, even about the core sections of the genome,” said Adam Arkin, director of the Synthetic Biology Institute at the University of California Berkeley, in a statement.
That said, Venter also noted that the concept of a minimal cell is context-dependent.
The specific genes that an organism requires to survive — even an organism as simple as a bacterial cell — depend on what kind of environment the cell is living in and what kinds of nutrients are available to it.
And, of course, one species’ minimal genome would likely differ significantly from that of another species.
With that in mind, exploring different forms of minimal genomes could have important industrial applications, said Daniel Gibson, another of the study’s authors and another scientist at the J. Craig Venter Institute, during the same teleconference.
Because these cells are so simple, devote all their energy to essential functions and are subject to very few genetic mutations, they are “straightforward to engineer” and could provide helpful insights into more complex types of biosynthesis in the future, he said.
Still, there’s plenty of work left to be done before the study of minimal genomes may yield practical applications.
“The major limitation is that this is the beginning of a very long road,” said Sriram Kosuri, an assistant professor of biochemistry at UCLA, in a statement.
“It’s not as if this new minimal genome will automatically lead to either fundamental insights or industrial applications immediately. That said, they’ve created a self-replicating biological organism that might be a better starting point for such scientific and engineering goals than continuing to study natural systems.”
 
 

Scientists May Have Found the Key to Curing Autism, Cancer and HIV

Gene editing tool CRISPR-Cas9 has made it possible to isolate RNA in living cells for the first time.



Mutations in RNA are linked to autism, cancer, and fragile X syndrome.

The cures for some of the world's most perplexing diseases might be closer than we think.
According to a study published in Cell, researchers have for the first time determined how to isolate and edit messenger RNA, which carries genetic instructions from the cell’s nucleus for making new proteins, using the gene-editing tool CRISPR-Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats, paired with the Cas9 enzyme).

They have previously used this tool to remove HIV from human immune cells and shut down HIV replication permanently, according to a study published in Nature in March.
“It opens up a new area of thinking about manipulating genes and disease,” Gene Yeo, associate professor of molecular medicine at UCSD and a senior author of the study, told Discovery. “In many diseases you cannot edit the genome, you can break the genome into pieces. But here we are doing transcription engineering or editing. That’s quite exciting.”
The gene-editing technique could lead to treatments for diseases that are linked to defective RNA and have previously been untreatable. These include certain cancers, fragile X syndrome and autism.
CRISPR-Cas9 can also potentially be used to edit genes that determine our physical features and maybe even our personality, leading to ethical questions about how to responsibly use the technology.
Discovery reports that the National Academy of Sciences is working on a set of ethical rules for this burgeoning field.

Wednesday, October 2, 2013

Biome


The planet Earth
 
Biomes are climatically and geographically defined as contiguous areas with similar climatic conditions, supporting characteristic communities of plants, animals, and soil organisms,[1] and are often referred to as ecosystems. Some parts of the earth have more or less the same kind of abiotic and biotic factors spread over a large area, creating a typical ecosystem over that area; such major ecosystems are termed biomes. Biomes are defined by factors such as plant structures (such as trees, shrubs, and grasses), leaf types (such as broadleaf and needleleaf), plant spacing (forest, woodland, savanna), and climate. Unlike ecozones, biomes are not defined by genetic, taxonomic, or historical similarities. Biomes are often identified with particular patterns of ecological succession and climax vegetation (the quasi-equilibrium state of the local ecosystem). An ecosystem has many biotopes, and a biome is a major habitat type. A major habitat type, however, is a compromise, as it has an intrinsic inhomogeneity. Some examples of habitats are ponds, trees, streams, creeks, spaces under rocks, and burrows in the sand or soil.
The biodiversity characteristic of each biome, especially the diversity of fauna and subdominant plant forms, is a function of abiotic factors and the biomass productivity of the dominant vegetation. In terrestrial biomes, species diversity tends to correlate positively with net primary productivity, moisture availability, and temperature.[2]
Ecoregions are grouped into both biomes and ecozones.
A fundamental classification divides biomes into:
  1. Terrestrial (land) biomes
  2. Aquatic biomes (including freshwater biomes and marine biomes)
Biomes are often known in English by local names. For example, a temperate grassland or shrubland biome is known commonly as steppe in central Asia, prairie in North America, and pampas in South America. Tropical grassland is known as savanna in Australia, whereas in southern Africa it is known as certain kinds of veld (from Afrikaans).
Sometimes an entire biome may be targeted for protection, especially under an individual nation's biodiversity action plan.
Climate is a major factor determining the distribution of terrestrial biomes. Among the important climatic factors are:
  • Latitude: Arctic, boreal, temperate, subtropical, tropical
  • Humidity: humid, semihumid, semiarid, and arid
    • seasonal variation: Rainfall may be distributed evenly throughout the year or be marked by seasonal variations.
    • dry summer, wet winter: Most regions of the earth receive most of their rainfall during the summer months; Mediterranean climate regions receive their rainfall during the winter months.
  • Elevation: Increasing elevation causes a distribution of habitat types similar to that of increasing latitude.
The most widely used systems of classifying biomes correspond to latitude (or temperature zoning) and humidity. Biodiversity generally increases away from the poles towards the equator and increases with humidity.

Biome classification schemes

In the Holdridge scheme, climates are classified based on the biological effects of temperature and rainfall on vegetation, under the assumption that these two abiotic factors are the largest determinants of the type of vegetation found in an area. Holdridge uses four axes to define 30 so-called "humidity provinces", which are clearly visible in the Holdridge diagram. While the scheme largely ignores soil and sun exposure, Holdridge acknowledged that these, too, are important factors in biome determination.

Holdridge scheme

Biome classification schemes such as Holdridge's are defined by climatic parameters. Particularly in the 1970s and 1980s, there was a significant push to understand the relationships between these climatic parameters and ecosystem energetics, because such discoveries would enable the prediction of rates of energy capture and transfer within ecosystems. One such study was conducted by Sims et al. (1978) on North American grasslands. It found a positive logistic correlation between evapotranspiration in mm/yr and above-ground net primary production in g/m²/yr. More generally, the study found that precipitation and water use drive above-ground primary production; solar radiation and temperature drive below-ground primary production (roots); and temperature and water together govern cool- and warm-season growth habits.[3] These findings help explain the categories used in Holdridge's bioclassification scheme, which Whittaker later simplified. The sheer number of classification schemes, and the variety of determinants they use, are strong indicators that biomes do not fit perfectly into any one scheme.
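The logistic relationship reported by Sims et al. can be sketched numerically. The parameter values below are illustrative assumptions, not the fitted values from the 1978 study; only the functional form (production rising with evapotranspiration and saturating at a ceiling) follows the text.

```python
import math

def anpp_logistic(et_mm_yr, k=1000.0, r=0.01, et_mid=500.0):
    """Logistic sketch of above-ground net primary production (g/m²/yr)
    as a function of annual evapotranspiration (mm/yr).

    k (asymptote), r (steepness), and et_mid (inflection point) are
    illustrative parameters, not values from Sims et al. (1978).
    """
    return k / (1.0 + math.exp(-r * (et_mm_yr - et_mid)))

# Production rises monotonically with evapotranspiration and saturates near k.
for et in (300, 500, 900):
    print(et, round(anpp_logistic(et), 1))
```

The logistic form captures the qualitative finding: in dry grasslands a small increase in water availability raises production steeply, while in already-wet sites production levels off.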

Whittaker's biome-type classification scheme

Whittaker appreciated biome-types as a representation of the great diversity of the living world, and saw the need to establish a simple way to classify them. He based his classification scheme on two abiotic factors: precipitation and temperature. His scheme can be seen as a simplification of Holdridge's, one more readily accessible, but perhaps missing the greater specificity that Holdridge's provides.
Whittaker based his representation of global biomes on both previous theoretical assertions and an ever-increasing empirical sampling of global ecosystems. He was in a unique position to make such a holistic assertion because he had previously compiled a review of biome classification.[4]

Key definitions for understanding Whittaker's scheme

  • Physiognomy: The apparent characteristics, outward features, or appearance of ecological communities or species
  • Biome: a grouping of terrestrial ecosystems on a given continent that are similar in vegetation structure, physiognomy, features of the environment and characteristics of their animal communities
  • Formation: a major kind of community of plants on a given continent
  • Biome-type: grouping of convergent biomes or formations of different continents, defined by physiognomy
  • Formation-type: a grouping of convergent formations
Whittaker's distinction between biome and formation can be simplified: formation is used when applied to plant communities only, while biome is used when concerned with both plants and animals. Whittaker's convention of biome-type or formation-type is simply a broader method to categorize similar communities.[5]

Whittaker's parameters for classifying biome-types

Whittaker, seeing the need for a simpler way to express the relationship of community structure to the environment, used what he called “gradient analysis” of ecocline patterns to relate communities to climate on a worldwide scale. Whittaker considered four main ecoclines in the terrestrial realm.[5]
  1. Intertidal levels: The wetness gradient of areas that are exposed to alternating water and dryness with intensities that vary by location from high to low tide
  2. Climatic moisture gradient
  3. Temperature gradient by altitude
  4. Temperature gradient by latitude
Along these gradients, Whittaker noted several trends that allowed him to qualitatively establish biome-types.
  • The gradient runs from favorable to extreme, with corresponding changes in productivity.
  • Changes in physiognomic complexity vary with the favorability of the environment (decreasing community structure and reduction of stratal differentiation as the environment becomes less favorable).
  • Trends in diversity of structure follow trends in species diversity; alpha and beta species diversities decrease from favorable to extreme environments.
  • Each growth-form (i.e. grasses, shrubs, etc.) has its characteristic place of maximum importance along the ecoclines.
  • The same growth forms may be dominant in similar environments in widely different parts of the world.
Whittaker summed the effects of gradients (3) and (4) to get an overall temperature gradient, and combined this with gradient (2), the moisture gradient, to express the above conclusions in what is known as the Whittaker classification scheme. The scheme graphs average annual precipitation (x-axis) versus average annual temperature (y-axis) to classify biome-types.
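The two-axis scheme described above can be sketched as a simple lookup: given mean annual temperature and mean annual precipitation, return a biome-type. The numeric thresholds below are rough illustrative cutoffs suggested by the shape of the Whittaker diagram, not Whittaker's published boundaries.

```python
def whittaker_biome(temp_c, precip_mm):
    """Classify a biome-type from mean annual temperature (°C, y-axis)
    and mean annual precipitation (mm, x-axis).

    Thresholds are illustrative approximations of the Whittaker
    diagram, not his exact boundaries.
    """
    if temp_c < -5:
        return "tundra"
    if temp_c < 3:
        return "boreal forest" if precip_mm > 200 else "tundra"
    if temp_c < 20:
        if precip_mm < 250:
            return "temperate grassland/desert"
        if precip_mm < 1000:
            return "woodland/shrubland"
        return "temperate forest"
    # Warm climates: moisture alone separates desert, savanna, rain forest.
    if precip_mm < 500:
        return "subtropical desert"
    if precip_mm < 1500:
        return "tropical seasonal forest/savanna"
    return "tropical rain forest"

print(whittaker_biome(25, 3000))   # hot and wet
print(whittaker_biome(-10, 300))   # very cold
```

Note how the function mirrors the gradients in the text: the temperature axis sums the altitude and latitude gradients, and the precipitation axis is the moisture gradient.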

Walter system

The Walter classification scheme, developed by the German ecologist Heinrich Walter, differs from both the Whittaker and Holdridge schemes in that it takes into account the seasonality of temperature and precipitation. The system, also based on precipitation and temperature, identifies nine major biomes, with the important climate traits and vegetation types summarized in the list below. The boundaries of each biome correlate with the conditions of moisture and cold stress that strongly determine plant form, and therefore the vegetation that defines the region. Extreme conditions, such as flooding in a swamp, can create different kinds of communities within the same biome.
  • I: Equatorial
    • Always moist and lacking temperature seasonality
    • Evergreen tropical rain forest
  • II: Tropical
    • Summer rainy season and cooler “winter” dry season
    • Seasonal forest, scrub, or savanna
  • III: Subtropical
    • Highly seasonal, arid climate
    • Desert vegetation with considerable exposed surface
  • IV: Mediterranean
    • Winter rainy season and summer drought
    • Sclerophyllous (drought-adapted), frost-sensitive shrublands and woodlands
  • V: Warm temperate
    • Occasional frost, often with summer rainfall maximum
    • Temperate evergreen forest, somewhat frost-sensitive
  • VI: Nemoral
    • Moderate climate with winter freezing
    • Frost-resistant, deciduous, temperate forest
  • VII: Continental
    • Arid, with warm or hot summers and cold winters
    • Grasslands and temperate deserts
  • VIII: Boreal
    • Cold temperate with cool summers and long winters
    • Evergreen, frost-hardy, needle-leaved forest (taiga)
  • IX: Polar
    • Very short, cool summers and long, very cold winters
    • Low, evergreen vegetation, without trees, growing over permanently frozen soils
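The nine zones above translate directly into a small lookup table. The zone numerals, climate names, and vegetation descriptions below are taken from the list; the helper function itself is just an illustrative convenience.

```python
# Walter's nine zones, keyed by zone numeral (from the list above).
WALTER_ZONES = {
    "I": ("Equatorial", "Evergreen tropical rain forest"),
    "II": ("Tropical", "Seasonal forest, scrub, or savanna"),
    "III": ("Subtropical", "Desert vegetation with considerable exposed surface"),
    "IV": ("Mediterranean", "Sclerophyllous, frost-sensitive shrublands and woodlands"),
    "V": ("Warm temperate", "Temperate evergreen forest, somewhat frost-sensitive"),
    "VI": ("Nemoral", "Frost-resistant, deciduous, temperate forest"),
    "VII": ("Continental", "Grasslands and temperate deserts"),
    "VIII": ("Boreal", "Evergreen, frost-hardy, needle-leaved forest (taiga)"),
    "IX": ("Polar", "Low, evergreen vegetation without trees, over permanently frozen soils"),
}

def walter_vegetation(zone):
    """Return the (climate name, typical vegetation) pair for a Walter zone numeral."""
    return WALTER_ZONES[zone]

print(walter_vegetation("VIII"))
```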

Bailey system

Robert G. Bailey developed a biogeographical classification system for the United States in a map published in 1976. He subsequently expanded the system to cover the rest of North America in 1981, and the world in 1989. The Bailey system, based on climate, is divided into four domains (polar, humid temperate, dry, and humid tropical), with further divisions based on other climate characteristics (subarctic, warm temperate, hot temperate, and subtropical; marine and continental; lowland and mountain).[6]
  • 100 Polar Domain
    • 120 Tundra Division (Koppen: Ft)
    • M120 Tundra Division – Mountain Provinces
    • 130 Subarctic Division (Koppen: E)
    • M130 Subarctic Division – Mountain Provinces
  • 200 Humid Temperate Domain
    • 210 Warm Continental Division (Koppen: portion of Dcb)
    • M210 Warm Continental Division – Mountain Provinces
    • 220 Hot Continental Division (Koppen: portion of Dca)
    • M220 Hot Continental Division – Mountain Provinces
    • 230 Subtropical Division (Koppen: portion of Cf)
    • M230 Subtropical Division – Mountain Provinces
    • 240 Marine Division (Koppen: Do)
    • M240 Marine Division – Mountain Provinces
    • 250 Prairie Division (Koppen: arid portions of Cf, Dca, Dcb)
    • 260 Mediterranean Division (Koppen: Cs)
    • M260 Mediterranean Division – Mountain Provinces
  • 300 Dry Domain
    • 310 Tropical/Subtropical Steppe Division
    • M310 Tropical/Subtropical Steppe Division – Mountain Provinces
    • 320 Tropical/Subtropical Desert Division
    • 330 Temperate Steppe Division
    • 340 Temperate Desert Division
  • 400 Humid Tropical Domain
    • 410 Savanna Division
    • 420 Rainforest Division

WWF system

A team of biologists convened by the World Wide Fund for Nature (WWF) developed an ecological land classification system that identified fourteen biomes,[7] called major habitat types, and further divided the world's land area into 867 terrestrial ecoregions. Each terrestrial ecoregion has a specific EcoID, in the format XXnnNN (XX is the ecozone, nn is the biome number, and NN is the individual ecoregion number). This classification is used to define the Global 200, the list of ecoregions identified by the WWF as priorities for conservation. The WWF major habitat types are outlined below.
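The EcoID layout described above (XXnnNN) can be unpacked mechanically. A minimal parsing sketch, assuming IDs are two uppercase letters followed by four digits; the sample ID is used only to show the format, not quoted from the WWF list.

```python
import re

def parse_ecoid(ecoid):
    """Split a WWF EcoID of the form XXnnNN into its parts:
    XX = ecozone code, nn = biome number, NN = individual ecoregion number.
    """
    m = re.fullmatch(r"([A-Z]{2})(\d{2})(\d{2})", ecoid)
    if m is None:
        raise ValueError(f"not a valid EcoID: {ecoid!r}")
    ecozone, biome, number = m.groups()
    return {"ecozone": ecozone, "biome": int(biome), "number": int(number)}

# Hypothetical example ID, illustrating the format only.
print(parse_ecoid("NT0101"))
```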

Freshwater biomes

According to the WWF, the following are classified as freshwater biomes:[8]
  • Streams and rivers

Marine biomes

Marine biomes (major habitat types), Global 200 (WWF)
Biomes of the coastal and continental shelf areas (neritic zone)
Realms or ecozones (marine, WWF)
  • North temperate Atlantic
  • Eastern tropical Atlantic
  • Western tropical Atlantic
  • South temperate Atlantic
  • North temperate Indo-Pacific
  • Central Indo-Pacific
  • Eastern Indo-Pacific
  • Western Indo-Pacific
  • South temperate Indo-Pacific
  • Southern Ocean
  • Antarctic
  • Arctic
  • Mediterranean


Anthropogenic biomes

Humans have fundamentally altered global patterns of biodiversity and ecosystem processes. As a result, vegetation forms predicted by conventional biome systems are rarely observed across most of Earth's land surface. Anthropogenic biomes provide an alternative view of the terrestrial biosphere, based on global patterns of sustained, direct human interaction with ecosystems, including agriculture, human settlements, urbanization, forestry, and other land uses. Anthropogenic biomes offer a new way forward in ecology and conservation by recognizing the irreversible coupling of human and ecological systems at global scales, moving us toward an understanding of how best to live in and manage the anthropogenic biosphere. The main biomes in the world are freshwater, marine, coniferous, deciduous, ice, mountain, boreal, grassland, tundra, and rainforest.

Major anthropogenic biomes

  • Dense settlements
  • Villages
  • Croplands
  • Rangelands
  • Forested

Wednesday, January 2, 2013

Habitats and Biome



  The Earth has many different kinds of habitats, with diverse, complex communities of interdependent plants and animals living in them. These are called biomes. Coniferous forests, deserts, grasslands, tundra, mountains, rain forests, caves, wetlands, and coral reefs are just some examples of Earth's many different biomes.