Embryology – Wikipedia

This article is about the development of embryos in animals. For the development of plant embryos, see Sporophyte.

Embryology (from Greek ἔμβρυον, embryon, "the unborn, embryo"; and -λογία, -logia) is the branch of biology that studies the development of gametes (sex cells), fertilization, and the development of embryos and fetuses. Additionally, embryology is the study of congenital disorders that occur before birth.[1]

After cleavage, the mass of dividing cells, or morula, becomes a hollow ball, or blastula, which develops a hole or pore at one end.

In bilateral animals, the blastula develops in one of two ways that divides the whole animal kingdom into two halves (see: Embryological origins of the mouth and anus). If in the blastula the first pore (blastopore) becomes the mouth of the animal, it is a protostome; if the first pore becomes the anus then it is a deuterostome. The protostomes include most invertebrate animals, such as insects, worms and molluscs, while the deuterostomes include the vertebrates. In due course, the blastula changes into a more differentiated structure called the gastrula.

The gastrula with its blastopore soon develops three distinct layers of cells (the germ layers: ectoderm, mesoderm, and endoderm) from which all the bodily organs and tissues then develop.

Embryos in many species often appear similar to one another in early developmental stages. The reason for this similarity is that the species share an evolutionary history. Structures that appear in different species because they evolved from a common ancestor are called homologous structures; they often retain the same or a similar function and mechanism.

Main article: Drosophila embryogenesis

Drosophila melanogaster, a fruit fly, is a model organism in biology on which much research into embryology has been done (see figure 1.1.1A and figure 1.1.1B).[2] Before fertilization, the female gamete produces an abundance of mRNA - transcribed from the genes that encode bicoid protein and nanos protein.[3][4] These mRNA molecules are stored to be used later in what will become a developing embryo. The male and female Drosophila gametes exhibit anisogamy (differences in morphology and sub-cellular biochemistry). The female gamete is larger than the male gamete because it harbors more cytoplasm and, within the cytoplasm, the female gamete contains an abundance of the mRNA previously mentioned.[5][6]

At fertilization, the male and female gametes fuse (plasmogamy) and then the nucleus of the male gamete fuses with the nucleus of the female gamete (karyogamy). Note that before the gametes' nuclei fuse, they are known as pronuclei.[7] A series of nuclear divisions will occur without cytokinesis (division of the cell) in the zygote to form a multi-nucleated cell (a cell containing multiple nuclei) known as a syncytium.[8][9]

All the nuclei in the syncytium are identical, just as all the nuclei in every somatic cell of any multicellular organism are identical in terms of the DNA sequence of the genome.[10] Before the nuclei can differentiate in transcriptional activity, the embryo (syncytium) must be divided into segments. In each segment, a unique set of regulatory proteins will cause specific genes in the nuclei to be transcribed. The resulting combination of proteins will transform clusters of cells into early embryo tissues that will each develop into multiple fetal and adult tissues later in development (note: this happens after each nucleus becomes wrapped with its own cell membrane).

Outlined below is the process that leads to cell and tissue differentiation.

Maternal-effect genes - subject to Maternal (cytoplasmic) inheritance.

Zygotic-effect genes - subject to Mendelian (classical) inheritance.

Humans are bilaterians and deuterostomes.

In humans, the term embryo refers to the ball of dividing cells from the moment the zygote implants itself in the uterus wall until the end of the eighth week after conception. Beyond the eighth week after conception (tenth week of pregnancy), the developing human is then called a fetus.

As recently as the 18th century, the prevailing notion in western human embryology was preformation: the idea that semen contains an embryo (a preformed, miniature infant, or homunculus) that simply becomes larger during development. The competing explanation of embryonic development was epigenesis, originally proposed 2,000 years earlier by Aristotle. Much early embryology came from the work of the Italian anatomists Aldrovandi, Aranzio, Leonardo da Vinci, Marcello Malpighi, Gabriele Falloppio, Girolamo Cardano, Emilio Parisano, Fortunio Liceti, Stefano Lorenzini, Spallanzani, Enrico Sertoli, and Mauro Rusconi.[22] According to epigenesis, the form of an animal emerges gradually from a relatively formless egg. As microscopy improved during the 19th century, biologists could see that embryos took shape in a series of progressive steps, and epigenesis displaced preformation as the favoured explanation among embryologists.[23]

Karl Ernst von Baer and Heinz Christian Pander proposed the germ layer theory of development; von Baer discovered the mammalian ovum in 1827.[24][25][26] Modern embryological pioneers include Charles Darwin, Ernst Haeckel, J.B.S. Haldane, and Joseph Needham. Other important contributors include William Harvey, Kaspar Friedrich Wolff, Heinz Christian Pander, August Weismann, Gavin de Beer, Ernest Everett Just, and Edward B. Lewis.

After the 1950s, with the helical structure of DNA elucidated and knowledge in the field of molecular biology growing, developmental biology emerged as a field of study that attempts to correlate genes with morphological change: to determine which genes are responsible for each morphological change that takes place in an embryo, and how these genes are regulated.

Many principles of embryology apply to invertebrates as well as to vertebrates.[27] Therefore, the study of invertebrate embryology has advanced the study of vertebrate embryology. However, there are many differences as well. For example, numerous invertebrate species release a larva before development is complete; at the end of the larval period, an animal for the first time comes to resemble an adult similar to its parent or parents. Although invertebrate embryology is similar in some ways for different invertebrate animals, there are also countless variations. For instance, while spiders proceed directly from egg to adult form, many insects develop through at least one larval stage.

Embryology has become an important research area for studying the genetic control of the development process (e.g. morphogens), its link to cell signalling, its importance for the study of certain diseases and mutations, and its links to stem cell research.


Biochemistry – Wikipedia

Biochemistry, sometimes called biological chemistry, is the study of chemical processes within and relating to living organisms.[1] By controlling information flow through biochemical signaling and the flow of chemical energy through metabolism, biochemical processes give rise to the complexity of life. Over the last decades of the 20th century, biochemistry became so successful at explaining living processes that now almost all areas of the life sciences, from botany to medicine to genetics, are engaged in biochemical research.[2] Today, the main focus of pure biochemistry is on understanding how biological molecules give rise to the processes that occur within living cells,[3] which in turn relates greatly to the study and understanding of tissues, organs, and whole organisms,[4] that is, all of biology.

Biochemistry is closely related to molecular biology, the study of the molecular mechanisms by which genetic information encoded in DNA is able to result in the processes of life.[5] Depending on the exact definition of the terms used, molecular biology can be thought of as a branch of biochemistry, or biochemistry as a tool with which to investigate and study molecular biology.

Much of biochemistry deals with the structures, functions and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates and lipids, which provide the structure of cells and perform many of the functions associated with life.[6] The chemistry of the cell also depends on the reactions of smaller molecules and ions. These can be inorganic, for example water and metal ions, or organic, for example the amino acids, which are used to synthesize proteins.[7] The mechanisms by which cells harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases.[8] In nutrition, they study how to maintain health and study the effects of nutritional deficiencies.[9] In agriculture, biochemists investigate soil and fertilizers, and try to discover ways to improve crop cultivation, crop storage and pest control.

At its broadest definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life, and the history of biochemistry may therefore go back as far as the ancient Greeks.[10] However, biochemistry as a specific scientific discipline has its beginning sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (today called amylase), in 1833 by Anselme Payen,[11] while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry.[12][13] Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism,[10] or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier.[14][15] Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry, for example Emil Fischer for his work on the chemistry of proteins,[16] and F. Gowland Hopkins on enzymes and the dynamic nature of biochemistry.[17]

The term "biochemistry" itself is derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term (Biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry), where he argued for the setting up of institutes dedicated to this field of study.[18][19] The German chemist Carl Neuberg, however, is often cited as having coined the word in 1903,[20][21][22] while some credit it to Franz Hofmeister.[23]

It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life.[25] Then, in 1828, Friedrich Wöhler published a paper on the synthesis of urea, proving that organic compounds can be created artificially.[26] Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy, and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle).

Another significant historic event in biochemistry is the discovery of the gene and its role in the transfer of information in the cell. This part of biochemistry is often called molecular biology.[27] In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin, and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with genetic transfer of information.[28] In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme.[29] In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science.[30] More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi), in the silencing of gene expression.[31]

Around two dozen of the 92 naturally occurring chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals seem to need none. All animals require sodium, but some plants do not. Plants need boron and silicon, but animals may not (or may need ultra-small amounts).

Just six elements (carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus) make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.[32]

The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids.[33] Many biological molecules are polymers: in this terminology, monomers are relatively small micromolecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.

The function of carbohydrates includes energy storage and providing structure. Sugars are carbohydrates, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.

The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates, others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits,[34][a] and deoxyribose (C5H10O4).
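The generalized formula above can be sketched in a few lines of Python (a minimal illustration, not a chemistry library):

```python
# Minimal sketch: the generalized monosaccharide formula CnH2nOn (n >= 3).
# Note that deoxyribose (C5H10O4) deviates from it, lacking one oxygen atom.
def monosaccharide_formula(n: int) -> str:
    """Return the empirical formula of an n-carbon monosaccharide."""
    if n < 3:
        raise ValueError("monosaccharides have at least 3 carbon atoms")
    return f"C{n}H{2 * n}O{n}"

print(monosaccharide_formula(6))  # glucose and fructose: C6H12O6
print(monosaccharide_formula(5))  # ribose: C5H10O5
```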

A monosaccharide can switch from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyls of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom. The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form.[35]

In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively, by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring (the same ring as in oxepane), rarely encountered, are called septanoses.

When two monosaccharides undergo dehydration synthesis, a molecule of water is released: two hydrogen atoms and one oxygen atom are lost from the two monosaccharides. The new molecule, consisting of two monosaccharides, is called a disaccharide and is joined together by a glycosidic or ether bond. The reverse reaction can also occur, using a molecule of water to split up a disaccharide and break the glycosidic bond; this is termed hydrolysis. The most well-known disaccharide is sucrose, ordinary sugar (in scientific contexts, called table sugar or cane sugar to differentiate it from other sugars). Sucrose consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, consisting of a glucose molecule and a galactose molecule. As most humans age, the production of lactase, the enzyme that hydrolyzes lactose back into glucose and galactose, typically decreases. This results in lactase deficiency, also called lactose intolerance.
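The mass balance of dehydration synthesis can be checked with a short Python sketch (the formula parser is a simplification that handles only element symbols followed by optional counts):

```python
from collections import Counter
import re

def parse_formula(formula: str) -> Counter:
    """Parse a simple formula like 'C6H12O6' into element counts."""
    return Counter({el: int(num or 1)
                    for el, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula)})

# Glucose + fructose -> sucrose + water (dehydration synthesis):
sucrose = parse_formula("C6H12O6") + parse_formula("C6H12O6")
sucrose.subtract(parse_formula("H2O"))  # two H and one O are lost as water
print(dict(sucrose))  # {'C': 12, 'H': 22, 'O': 11}
```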

When a few (around three to six) monosaccharides are joined, the result is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses.[36] Many monosaccharides joined together make a polysaccharide. They can be joined together in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers: cellulose is an important structural component of plants' cell walls, while glycogen is used as a form of energy storage in animals.

Sugars can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).

Lipids comprise a diverse range of molecules and to some extent are a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear aliphatic molecules, while others have ring structures. Some are aromatic, while others are not. Some are flexible, while others are rigid.[39]

Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).[40]

Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere -OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.[41]

Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc., are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, which are the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilisers (e.g., in parenteral infusions) or else as drug carrier components (e.g., in a liposome or transfersome).

Proteins are very large molecules (macro-biopolymers) made from monomers called amino acids. An amino acid consists of a carbon atom bound to four groups. One is an amino group, NH2, and one is a carboxylic acid group, COOH (although these exist as NH3+ and COO− under physiologic conditions). The third is a simple hydrogen atom. The fourth is commonly denoted "R" and is different for each amino acid. There are 20 standard amino acids, each containing a carboxyl group, an amino group, and a side-chain (known as an "R" group). The "R" group is what makes each amino acid different, and the properties of the side-chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.[42]
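Since each peptide bond forms by dehydration, a chain of n residues is held together by n - 1 peptide bonds, releasing one water molecule per bond; a trivial Python sketch:

```python
# Sketch: linking n amino acids requires n - 1 peptide bonds,
# each formed by dehydration synthesis (one water molecule released).
def peptide_bonds(n_residues: int) -> int:
    return max(n_residues - 1, 0)

print(peptide_bonds(2))    # a dipeptide: 1
print(peptide_bonds(585))  # a protein the size of serum albumin: 584
```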

Some proteins perform largely structural roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules; they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. In fact, the enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process, and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.

The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein simply consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helices can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The β chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.[43]
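The effect of a single substitution on primary structure can be illustrated in Python; the string below is the N-terminal stretch of the human hemoglobin β chain (mature numbering), with the sickle-cell Glu (E) to Val (V) change at position 6:

```python
# Illustration: a point substitution in a protein's primary structure.
# The first residues of the mature human hemoglobin beta chain:
hbb = "VHLTPEEKSAVTALWGKV"

def substitute(seq: str, pos: int, residue: str) -> str:
    """Replace the residue at 1-based position `pos` with `residue`."""
    return seq[:pos - 1] + residue + seq[pos:]

# Sickle-cell variant: glutamate at position 6 replaced by valine.
hbs = substitute(hbb, 6, "V")
print(hbs)  # VHLTPVEKSAVTALWGKV
```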

Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine, and then absorbed. They can then be joined to make new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to make all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. These are the essential amino acids, since it is essential to ingest them. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
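The classification in this paragraph can be written out as Python sets (three-letter codes):

```python
# The 20 standard amino acids, grouped as in the text.
ESSENTIAL = {"Ile", "Leu", "Lys", "Met", "Phe", "Thr", "Trp", "Val"}
NONESSENTIAL = {"Ala", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly",
                "Pro", "Ser", "Tyr"}
# Synthesized by mammals, but not fast enough for young, growing animals:
CONDITIONALLY_ESSENTIAL = {"Arg", "His"}

all_twenty = ESSENTIAL | NONESSENTIAL | CONDITIONALLY_ESSENTIAL
print(len(all_twenty))  # 20
```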

If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to make a protein.[44]

A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms, so a suitable method for excreting it must exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms simply release the ammonia into the environment. Likewise, bony fish can release ammonia into the water, where it is quickly diluted. In general, mammals convert the ammonia into urea via the urea cycle.[45]

In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules.[46] The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
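As a toy version of sequence comparison, percent identity between two already-aligned sequences can be computed directly (real homology searches such as BLAST or Needleman-Wunsch alignment also handle gaps and substitution scoring matrices):

```python
# Toy sequence comparison: percent identity of two aligned sequences.
def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# 7 of 8 positions match between these two short peptide sequences:
print(percent_identity("GAWSENGK", "GAWTENGK"))  # 87.5
```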

Nucleic acids, so named because of their prevalence in cellular nuclei, are the generic name for a family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses.[2] The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.[47]

The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA).[48] The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid (similar to a zipper). Adenine binds with thymine (in DNA) or uracil (in RNA), while cytosine and guanine bind only with one another.
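These base-pairing rules can be encoded in a few lines of Python (a sketch; since the two strands of DNA run antiparallel, biologists usually work with the reverse complement):

```python
# Watson-Crick pairing in DNA: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Base-by-base complement of a DNA strand."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATCG"))        # TAGC
print(complement("ATCG")[::-1])  # reverse complement: CGAT
```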

Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms.[49] Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.

Glucose is the major energy source in most life forms. For instance, polysaccharides are broken down into their monomers (glycogen phosphorylase removes glucose residues from glycogen). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.

Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents, converting NAD+ (nicotinamide adenine dinucleotide, oxidised form) to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g., in humans) or to ethanol plus carbon dioxide (e.g., in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.[50]

In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two more molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (two from glycolysis, two from the citrate cycle, and 28 from the respiratory chain).[51] It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
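The ATP bookkeeping in this paragraph can be tallied explicitly (these are the text's own figures; published yields vary roughly between 30 and 38 ATP per glucose depending on the assumptions used):

```python
# ATP yield per glucose, using the figures given in the text.
atp_glycolysis = 2
atp_citrate_cycle = 2
atp_from_nadh = 24     # from 8 NADH fed into the respiratory chain
atp_from_quinols = 4   # from 2 reduced (ubi)quinones (via FADH2)

total = atp_glycolysis + atp_citrate_cycle + atp_from_nadh + atp_from_quinols
print(total)  # 32
```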

In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathways of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle.[52]

Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology and biophysics. There has never been a hard line between these disciplines in terms of content and technique. Today, the terms molecular biology and biochemistry are nearly interchangeable. The following figure is a schematic depicting one possible view of the relationship between the fields:

a. ^ Fructose is not the only sugar found in fruits. Glucose and sucrose are also found in varying quantities in various fruits, and indeed sometimes exceed the fructose present. For example, 32% of the edible portion of a date is glucose, compared with 23.70% fructose and 8.20% sucrose. However, peaches contain more sucrose (6.66%) than they do fructose (0.93%) or glucose (1.47%).[55]


Genetics – Wikipedia

This article is about the general scientific term. For the scientific journal, see Genetics (journal).

Genetics is the study of genes, genetic variation, and heredity in living organisms.[1][2] It is generally considered a field of biology, but it intersects frequently with many of the life sciences and is strongly linked with the study of information systems.

Gregor Mendel, a 19th-century scientist and Augustinian friar, is regarded as the father of genetics. Mendel studied 'trait inheritance', patterns in the way traits are handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous description of what is now referred to as a gene.

Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance) and within the context of a population. Genetics has given rise to a number of sub-fields including epigenetics and population genetics. Organisms studied within the broad field span the domain of life, including bacteria, plants, animals, and humans.

Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intra- or extra-cellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate. While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.

The word genetics stems from the Ancient Greek genetikos meaning "genitive"/"generative", which in turn derives from genesis meaning "origin".[3][4][5]

The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding.[6] The modern science of genetics, seeking to understand this process, began with the work of Gregor Mendel in the mid-19th century.[7]

Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetics". He described several rules of genetic inheritance in his work The genetic laws of the Nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as what Mendel later published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries).[8]

Other theories of inheritance preceded Mendel's work. A popular theory during Mendel's time was the concept of blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents.[9] Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong: the experiences of individuals do not affect the genes they pass to their children,[10] although evidence in the field of epigenetics has revived some aspects of Lamarck's theory.[11] Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.[12]

Modern genetics started with Gregor Johann Mendel, a scientist and Augustinian friar who studied the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically.[13] Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.

The importance of Mendel's work did not gain wide understanding until the 1890s, after his death, when other scientists working on similar problems re-discovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905.[14][15] (The adjective genetic, derived from the Greek word genesis, "origin", predates the noun and was first used in a biological sense in 1860.)[16] Bateson both acted as a mentor and was aided significantly by the work of women scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow.[17] Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London, England, in 1906.[18]

After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies.[19] In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.[20]

Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two was responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation.[21] The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single-celled alga Acetabularia.[22] The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.[23]

James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA had a helical structure (i.e., shaped like a corkscrew).[24][25] Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what looks like rungs on a twisted ladder.[26] This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.[27]

Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production.[28] It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.[29]

With the newfound molecular understanding of inheritance came an explosion of research.[30] A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs.[31] One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule.[32] In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture.[33] The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.[34]

At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to progeny.[35] This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants.[13][36] In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white, but never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles.

In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent.[37] Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous.

The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.[38]

When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation.

Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.[39]

In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
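A Punnett square can be sketched programmatically. A minimal example for a monohybrid cross, using hypothetical alleles "P" (dominant, purple flowers) and "p" (recessive, white) in the spirit of Mendel's peas:

```python
from itertools import product

def punnett(parent1, parent2):
    """Offspring genotypes of a monohybrid cross, e.g. 'Pp' x 'Pp'.
    The dominant allele is written uppercase; sorting normalizes 'pP' to 'Pp'."""
    return ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]

# F1 x F1 cross of two heterozygotes:
offspring = punnett("Pp", "Pp")
print(offspring)  # ['PP', 'Pp', 'Pp', 'pp']

# Phenotype ratio: any genotype carrying 'P' shows the dominant trait.
purple = sum(1 for g in offspring if "P" in g)
print(purple, ":", len(offspring) - purple)  # 3 : 1
```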

When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits.[40] These charts map the inheritance of a trait in a family tree.

Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "Law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.)

Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white, regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.[41]

Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes.[42] The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability.[43] Measurement of the heritability of a trait is relative: in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience more variable access to good nutrition and health care, height has a heritability of only 62%.[44]
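The heritability statistic above is a variance ratio, which can be sketched directly. The variance figures here are illustrative, not taken from the article; the point is only that the same genetic variance yields a lower heritability in a more variable environment:

```python
# Broad-sense heritability as the genetic share of phenotypic variance:
# H^2 = V_G / (V_G + V_E). Numbers below are illustrative assumptions.
def heritability(v_genetic, v_environment):
    return v_genetic / (v_genetic + v_environment)

# Same genetic variance; a more variable environment lowers heritability,
# mirroring the US vs. Nigeria height comparison above.
print(heritability(8.0, 1.0))  # ~0.89
print(heritability(8.0, 5.0))  # ~0.62
```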

The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain.[45] Viruses are the only exception to this rule: sometimes viruses use the very similar molecule, RNA, instead of DNA as their genetic material.[46] Viruses cannot reproduce without a host and are unaffected by many genetic processes, so tend not to be considered living organisms.

DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.[47]
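The complementarity described above is mechanical enough to sketch in code: each strand fully determines its partner, which is why a separated strand can serve as a template during replication.

```python
# Watson-Crick base pairing: A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the partner strand determined by base pairing."""
    return "".join(PAIR[base] for base in strand)

template = "ATCGGC"
new_strand = complement(template)
print(new_strand)                          # TAGCCG
# The information is redundant: complementing twice recovers the original.
print(complement(new_strand) == template)  # True
```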

Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length.[48] The DNA of a chromosome is associated with structural proteins that organize, compact and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins.[49] The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.

While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene.[37] The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.

Many species have so-called sex chromosomes that determine the sex of each organism.[50] In humans and many other animals, the Y chromosome contains the gene that triggers the development of specifically male characteristics. Over the course of evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. The X and Y chromosomes form a strongly heterogeneous pair.

When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.

Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid).[37] Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.

Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium.[51] Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation.[52] These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated.

The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes.[53] This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells.

The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.

The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated.[54] For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.[55]
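The idea of a linkage map can be sketched as follows. The gene names and recombination frequencies below are made up for illustration; the point is that if pairwise distances are roughly additive, the gene order along the chromosome can be inferred:

```python
# Toy linkage map: recombination frequency (% recombinant offspring) is used
# as a rough map distance. Gene names and frequencies are hypothetical.
recomb_freq = {("a", "b"): 5.0, ("b", "c"): 12.0, ("a", "c"): 17.0}

# If the a-b and b-c distances roughly sum to the a-c distance,
# gene b lies between a and c.
gap = abs(recomb_freq[("a", "b")] + recomb_freq[("b", "c")]
          - recomb_freq[("a", "c")])
if gap < 1.0:
    print("inferred order: a - b - c")
```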

Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each of which is composed of a sequence of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.

This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code.[56] The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNAa phenomenon Francis Crick called the central dogma of molecular biology.[57]
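Translation can be sketched as a table lookup over codons. The mapping below is a small subset of the real genetic code (only the codons listed are handled), sufficient to show reading frame and stop behavior:

```python
# Subset of the standard genetic code (codon -> amino acid, or STOP).
CODONS = {
    "AUG": "Met", "UUU": "Phe", "GGA": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA three nucleotides at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS[mrna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGAAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```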

The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.[58][59] Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.

A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.[60] Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.

Some DNA sequences are transcribed into RNA but are not translated into protein productssuch RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (e.g. microRNA).

Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. This is the complementary relationship often referred to as "nature and nurture". The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder such as its legs, ears, tail and face so the cat has dark-hair at its extremities.[61]

Environment plays a major role in effects of the human genetic disease phenylketonuria.[62] The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive mental retardation and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.

A popular method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births.[63] Because identical siblings come from the same zygote, they are genetically the same. Fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors; that is, whether it has "nature" or "nurture" causes. One famous example is the multiple birth study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.[64] However, such tests cannot separate genetic factors from environmental factors affecting fetal development.

The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA, and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene.[65] Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes: tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.[66]

Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.

Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells.[67] These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.[68]

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low (1 error in every 10–100 million bases) due to the "proofreading" ability of DNA polymerases.[69][70] Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure.[71] Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence.
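The error-rate figures above translate into an expected number of copying errors per genome replication. The genome size below is an assumption (roughly the size of the human genome), used only to scale the article's rate range:

```python
# Expected copying errors per genome replication, using the error-rate range
# above and an assumed genome size of ~3.2 billion base pairs (human-scale).
genome_size = 3.2e9
rates = (1 / 10e6, 1 / 100e6)   # 1 error per 10 to 100 million bases
errors = [round(genome_size * r) for r in rates]
print(errors)  # [320, 32] errors per replication across the rate range
```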

In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations.[72] Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence: duplications, inversions, deletions of entire regions, or the accidental exchange of whole parts of sequences between different chromosomes (chromosomal translocation).

Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness.[73] Mutations that do have an effect are usually deleterious, but occasionally some can be beneficial.[74] Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations will be harmful with the remainder being either neutral or weakly beneficial.[75]

Population genetics studies the distribution of genetic differences within populations and how these distributions change over time.[76] Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism,[77] as well as other factors such as mutation, genetic drift, genetic draft,[78] artificial selection and migration.[79]
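Genetic drift, one of the factors listed above, is easy to sketch as a simulation: in a finite population, allele frequencies wander randomly from generation to generation and can be lost or fixed by chance alone. The population size and generation count below are arbitrary illustration choices:

```python
import random

def drift(freq, pop_size, generations, rng):
    """Simulate genetic drift: each generation, 2N allele copies are
    drawn at random from the current allele frequency."""
    for _ in range(generations):
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        if freq in (0.0, 1.0):   # allele lost or fixed by chance
            break
    return freq

rng = random.Random(0)           # fixed seed for reproducibility
final_freq = drift(0.5, pop_size=20, generations=200, rng=rng)
print(final_freq)
```

In a small population the allele usually drifts to loss (0.0) or fixation (1.0) well within 200 generations, even though no selection is acting.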

Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment.[80] New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.[81] The application of genetic principles to the study of population biology and evolution is known as the "modern synthesis".

By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).[82]
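A very crude version of the sequence comparison described above is the p-distance: the proportion of sites that differ between two aligned sequences. The sequences below are invented for illustration:

```python
# Proportion of mismatched sites between two aligned sequences -
# a crude proxy for evolutionary distance (no correction for
# multiple substitutions at the same site).
def p_distance(seq1, seq2):
    assert len(seq1) == len(seq2), "sequences must be aligned"
    mismatches = sum(a != b for a, b in zip(seq1, seq2))
    return mismatches / len(seq1)

print(p_distance("GATTACA", "GACTATA"))  # 2 of 7 sites differ
```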

Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research.[83] Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer.

Organisms were chosen, in part, for convenience: short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), and the common house mouse (Mus musculus).

Medical genetics seeks to understand how genetic variation relates to human health and disease.[84] When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene.[85] Once a candidate gene is found, further research is often done on the corresponding (orthologous) gene in model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.[86]

Individuals differ in their inherited tendency to develop cancer,[87] and cancer is a genetic disease.[88] The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (3-7) that allow it to bypass this regulation: it no longer needs growth factors to divide, it continues growing when in contact with neighboring cells and ignores inhibitory signals, it keeps growing indefinitely and is immortal, and it can escape from the epithelium and ultimately from the primary tumor, cross the endothelium of a blood vessel, be transported by the bloodstream, and colonize a new organ, forming a deadly metastasis. Although there are genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor; these are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of the p53 protein, a tumor suppressor, or of others in the p53 pathway, and gain-of-function mutations in the ras proteins or in other oncogenes.

DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA.[89] DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
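As a sketch of what "predictable fragments" means: EcoRI genuinely recognizes the sequence GAATTC, but the DNA string below is invented, and for simplicity the code cuts at the start of each site rather than at the enzyme's actual staggered cut position within it.

```python
def digest(dna: str, site: str = "GAATTC") -> list[str]:
    """Split a linear DNA sequence at every occurrence of a recognition site.

    Simplification: cuts are made at the start of each site, not at the
    enzyme's real cut position inside it.
    """
    fragments = []
    start = 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos])
        start = pos
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return [f for f in fragments if f]

dna = "AAATGAATTCCCGGGAATTCTT"  # invented sequence with two EcoRI sites
frags = digest(dna)
print([len(f) for f in frags])  # [4, 10, 8] - the lengths a gel would separate
```

Gel electrophoresis would then resolve these fragments by length, smallest migrating farthest.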

The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacterial cells). ("Cloning" can also refer to the various means of creating cloned ("clonal") organisms.)

DNA can also be amplified using a procedure called the polymerase chain reaction (PCR).[90] By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
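The sensitivity of PCR follows from simple arithmetic: each cycle ideally doubles the number of copies of the target region, so growth is exponential. The function below is an illustrative sketch; the efficiency parameter is a simplification of real reaction kinetics.

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds of amplification.

    efficiency=1.0 means perfect doubling each cycle; real reactions
    fall somewhat short of this ideal.
    """
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))        # 2**30, roughly a billion copies from one molecule
print(pcr_copies(10, 30, 0.9))  # same run assuming 90% per-cycle efficiency
```

This exponential growth is why a handful of template molecules suffices for detection.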

DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments.[91] Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.

As sequencing has become less expensive, researchers have sequenced the genomes of many organisms, using a process called genome assembly, which utilizes computational tools to stitch together sequences from many different fragments.[92] These technologies were used to sequence the human genome in the Human Genome Project completed in 2003.[34] New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.[93]
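The "stitching together" step can be caricatured with a greedy overlap merge: repeatedly join the two reads sharing the longest suffix-prefix overlap. This is only a toy under invented reads; real assemblers use graph-based methods and must handle sequencing errors and repeats.

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Merge reads pairwise, always taking the largest overlap first."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, left read index, right read index)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

reads = ["ATGCTAG", "CTAGGAC", "GACTTTA"]  # invented overlapping fragments
print(greedy_assemble(reads))  # ATGCTAGGACTTTA
```

The quadratic pairwise search here is exactly what makes naive assembly expensive, and why practical assemblers index overlaps instead.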

Next generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently.[94][95] The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A problem common to these fields of research is how to manage and share data involving human subjects and personally identifiable information. See also genomics data sharing.

On 19 March 2015, a leading group of biologists urged a worldwide ban on clinical use of genome-editing methods, particularly CRISPR and zinc finger nucleases, to edit the human genome in a way that can be inherited.[96][97][98][99] In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[100][101]

Read more from the original source:
Genetics - Wikipedia

Cell biology – Wikipedia

Cell biology (formerly called cytology, from the Greek κύτος, kytos, "vessel") is a branch of biology that studies the different structures and functions of the cell and focuses mainly on the idea of the cell as the basic unit of life. Cell biology explains the structure of cells, the organization of the organelles they contain, their physiological properties, metabolic processes, signaling pathways, life cycle, and interactions with their environment. This is done both on a microscopic and molecular level, as it encompasses prokaryotic cells and eukaryotic cells. Knowing the components of cells and how cells work is fundamental to all biological sciences; it is also essential for research in biomedical fields such as cancer and other diseases. Research in cell biology is closely related to genetics, biochemistry, molecular biology, immunology, and developmental biology.

The study of the cell is done on a molecular level; however, most of the processes within the cell involve a mixture of small organic molecules, inorganic ions, hormones, and water. Approximately 75-85% of the cell's volume is water, making it an indispensable solvent as a result of its polarity and structure.[1] These molecules within the cell, which operate as substrates, provide a suitable environment for the cell to carry out metabolic reactions and signalling. Cell shape varies among the different types of organisms, which are thus classified into two categories: eukaryotes and prokaryotes. Eukaryotic cells - those of animals, plants, fungi, and protozoa - are generally round and spherical,[2] while prokaryotic cells - those of bacteria and archaea - take shapes such as spheres (cocci), rods (bacilli), curves (vibrios), and spirals (spirochetes).[3]

Cell biology focuses more on the study of eukaryotic cells and their signalling pathways, rather than on prokaryotes, which are covered under microbiology. The main constituents of the general molecular composition of the cell include proteins and lipids, which are either free-flowing or membrane-bound, along with different internal compartments known as organelles. This environment of the cell is made up of hydrophilic and hydrophobic regions, which allows for the exchange of the above-mentioned molecules and ions. The hydrophilic regions of the cell are mainly on the inside and outside of the cell, while the hydrophobic regions are within the phospholipid bilayer of the cell membrane. The cell membrane consists of lipids and proteins, which accounts for its hydrophobicity, as these are non-polar substances.[1] Therefore, in order for these molecules to participate in reactions within the cell, they need to be able to cross this membrane layer to get into the cell. They gain access to the cell via osmotic pressure, diffusion, concentration gradients, and membrane channels. Inside the cell are extensive internal sub-cellular membrane-bounded compartments called organelles.


The growth process of the cell refers not to the size of the individual cell but to the number of cells present in the organism at a given time. Cell growth pertains to the increase in the number of cells present in an organism as it grows and develops; as the organism gets larger, so too does the number of cells present. Cells are the foundation of all organisms; they are the fundamental unit of life. The growth and development of the cell are essential for the maintenance of the host and the survival of the organism. For this process the cell goes through the steps of the cell cycle and development, which involve cell growth, DNA replication, cell division, regeneration, specialization, and cell death. The cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phases - the cell growth phases - make up approximately 95% of the cycle.[4] The proliferation of cells is instigated by progenitors; the cells then differentiate to become specialized, and specialized cells of the same type aggregate to form tissues, then organs, and ultimately systems.[1] The G phases, along with the S phase (DNA replication, damage and repair), are considered to be the interphase portion of the cycle, while the M phase (mitosis and cytokinesis) is the cell division portion.[4] The cell cycle is regulated by a series of signalling factors and complexes such as CDKs, other kinases, and p53, to name a few. When the cell has completed its growth process, and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it poses to the organism's survival.

Cells may be observed under the microscope, using several different techniques; these include optical microscopy, transmission electron microscopy, scanning electron microscopy, fluorescence microscopy, and confocal microscopy.

There are several different methods used in the study of cells:

Purification of cells and their parts

Purification may be performed using the following methods:

Practical job applications for a degree in Cell Molecular Biology include the following.[7]

Excerpt from:
Cell biology - Wikipedia

The Neuroscience Of Music | WIRED


Why does music make us feel? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext. And yet, even though music says little, it still manages to touch us deeply, to tickle some universal nerves. When listening to our favorite songs, our body betrays all the symptoms of emotional arousal. The pupils in our eyes dilate, our pulse and blood pressure rise, the electrical conductance of our skin is lowered, and the cerebellum, a brain region associated with bodily movement, becomes strangely active. Blood is even redirected to the muscles in our legs. (Some speculate that this is why we begin tapping our feet.) In other words, sound stirs us at our biological roots. As Schopenhauer wrote, "It is we ourselves who are tortured by the strings."

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense states of excitement. A brand new paper in Nature Neuroscience by a team of Montreal researchers marks an important step in revealing the precise underpinnings of the potent pleasurable stimulus that is music. Although the study involves plenty of fancy technology, including fMRI and ligand-based positron emission tomography (PET) scanning, the experiment itself was rather straightforward. After screening 217 individuals who responded to advertisements requesting people who experience chills to instrumental music, the scientists narrowed down the subject pool to ten. (These were the lucky few who most reliably got chills.) The scientists then asked the subjects to bring in their playlists of favorite songs - virtually every genre was represented, from techno to tango - and played them the music while their brain activity was monitored.

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain an impressively precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that music triggers the release of dopamine in both the dorsal and ventral striatum. This isn't particularly surprising: these regions have long been associated with the response to pleasurable stimuli. It doesn't matter if we're having sex or snorting cocaine or listening to Kanye: these things fill us with bliss because they tickle these cells. Happiness begins here.

The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills. I won't go into the precise neural correlates - let's just say that you should thank your right NAcc the next time you listen to your favorite song - but want instead to focus on an interesting distinction observed in the experiment:

In essence, the scientists found that our favorite moments in the music were preceded by a prolonged increase of activity in the caudate. They call this the "anticipatory phase" and argue that the purpose of this activity is to help us predict the arrival of our favorite part:

Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.

In other words, the abstract pitches have become a primal reward cue, the cultural equivalent of a bell that makes us drool. Here is their summary:

The anticipatory phase, set off by temporal cues signaling that a potentially pleasurable auditory sequence is coming, can trigger expectations of euphoric emotional states and create a sense of wanting and reward prediction. This reward is entirely abstract and may involve such factors as suspended expectations and a sense of resolution. Indeed, composers and performers frequently take advantage of such phenomena, and manipulate emotional arousal by violating expectations in certain ways or by delaying the predicted outcome (for example, by inserting unexpected notes or slowing tempo) before the resolution to heighten the motivation for completion. The peak emotional response evoked by hearing the desired sequence would represent the consummatory or liking phase, representing fulfilled expectations and accurate reward prediction. We propose that each of these phases may involve dopamine release, but in different subcircuits of the striatum, which have different connectivity and functional roles.

The question, of course, is what all these dopamine neurons are up to. What aspects of music are they responding to? And why are they so active fifteen seconds before the acoustic climax? After all, we typically associate surges of dopamine with pleasure, with the processing of actual rewards. And yet, this cluster of cells in the caudate is most active when the chills have yet to arrive, when the melodic pattern is still unresolved.

One way to answer these questions is to zoom out, to look at the music and not the neuron. While music can often seem (at least to the outsider) like a labyrinth of intricate patterns - it's art at its most mathematical - it turns out that the most important part of every song or symphony is when the patterns break down, when the sound becomes unpredictable. If the music is too obvious, it is annoyingly boring, like an alarm clock. (Numerous studies, after all, have demonstrated that dopamine neurons quickly adapt to predictable rewards. If we know what's going to happen next, then we don't get excited.) This is why composers introduce the tonic note in the beginning of the song and then studiously avoid it until the end. The longer we are denied the pattern we expect, the greater the emotional release when the pattern returns, safe and sound. That is when we get the chills.

To demonstrate this psychological principle, the musicologist Leonard Meyer, in his classic book Emotion and Meaning in Music (1956), analyzed the 5th movement of Beethoven's String Quartet in C-sharp minor, Op. 131. Meyer wanted to show how music is defined by its flirtation with, but not submission to, our expectations of order. To prove his point, Meyer dissected fifty measures of Beethoven's masterpiece, showing how Beethoven begins with the clear statement of a rhythmic and harmonic pattern and then, in an intricate tonal dance, carefully avoids repeating it. What Beethoven does instead is suggest variations of the pattern. He is its evasive shadow. If E major is the tonic, Beethoven will play incomplete versions of the E major chord, always careful to avoid its straight expression. He wants to preserve an element of uncertainty in his music, making our brains beg for the one chord he refuses to give us. Beethoven saves that chord for the end.

According to Meyer, it is the suspenseful tension of music (arising out of our unfulfilled expectations) that is the source of the music's feeling. While earlier theories of music focused on the way a noise can refer to the real world of images and experiences (its "connotative" meaning), Meyer argued that the emotions we find in music come from the unfolding events of the music itself. This "embodied meaning" arises from the patterns the symphony invokes and then ignores, from the ambiguity it creates inside its own form. "For the human mind," Meyer writes, "such states of doubt and confusion are abhorrent. When confronted with them, the mind attempts to resolve them into clarity and certainty." And so we wait, expectantly, for the resolution of E major, for Beethoven's established pattern to be completed. This nervous anticipation, says Meyer, "is the whole raison d'être of the passage, for its purpose is precisely to delay the cadence in the tonic." The uncertainty makes the feeling: it is what triggers that surge of dopamine in the caudate, as we struggle to figure out what will happen next. And so our neurons search for the undulating order, trying to make sense of this flurry of pitches. We can predict some of the notes, but we can't predict them all, and that is what keeps us listening, waiting expectantly for our reward, for the errant pattern to be completed. Music is a form whose meaning depends upon its violation.

Homepage image: Kashirin Nickolai, Flickr.

Read the original:
The Neuroscience Of Music | WIRED

Psychology – Wikipedia, the free encyclopedia

Psychology is the study of behavior and mind, embracing all aspects of conscious and unconscious experience as well as thought. It is an academic discipline and a social science which seeks to understand individuals and groups by establishing general principles and researching specific cases.[1][2] In this field, a professional practitioner or researcher is called a psychologist and can be classified as a social, behavioral, or cognitive scientist. Psychologists attempt to understand the role of mental functions in individual and social behavior, while also exploring the physiological and biological processes that underlie cognitive functions and behaviors.

Psychologists explore behavior and mental processes, including perception, cognition, attention, emotion (affect), intelligence, phenomenology, motivation (conation), brain functioning, and personality. This extends to interaction between people, such as interpersonal relationships, including psychological resilience, family resilience, and other areas. Psychologists of diverse orientations also consider the unconscious mind.[3] Psychologists employ empirical methods to infer causal and correlational relationships between psychosocial variables. In addition, or in opposition, to employing empirical and deductive methods, some, especially clinical and counseling psychologists, at times rely upon symbolic interpretation and other inductive techniques. Psychology has been described as a "hub science",[4] with psychological findings linking to research and perspectives from the social sciences, natural sciences, medicine, humanities, and philosophy.

While psychological knowledge is often applied to the assessment and treatment of mental health problems, it is also directed towards understanding and solving problems in several spheres of human activity. By many accounts psychology ultimately aims to benefit society.[5][6] The majority of psychologists are involved in some kind of therapeutic role, practicing in clinical, counseling, or school settings. Many do scientific research on a wide range of topics related to mental processes and behavior, and typically work in university psychology departments or teach in other academic settings (e.g., medical schools, hospitals). Some are employed in industrial and organizational settings, or in other areas[7] such as human development and aging, sports, health, and the media, as well as in forensic investigation and other aspects of law.

The word psychology derives from Greek roots meaning study of the psyche, or soul (ψυχή psukhē, "breath, spirit, soul" and -λογία -logia, "study of" or "research").[8] The Latin word psychologia was first used by the Croatian humanist and Latinist Marko Marulić in his book Psichiologia de ratione animae humanae in the late 15th century or early 16th century.[9] The earliest known reference to the word psychology in English was by Steven Blankaart in 1694 in The Physical Dictionary, which refers to "Anatomy, which treats the Body, and Psychology, which treats of the Soul."[10]

In 1890, William James defined psychology as "the science of mental life, both of its phenomena and their conditions". This definition enjoyed widespread currency for decades. However, this meaning was contested, notably by radical behaviorists such as John Watson, who in his 1913 manifesto defined the discipline of psychology as the acquisition of information useful to the control of behavior. Also, since James defined it, the term more strongly connotes techniques of scientific experimentation.[11][12] Folk psychology refers to the understanding of ordinary people, as contrasted with that of psychology professionals.[13]

The ancient civilizations of Egypt, Greece, China, India, and Persia all engaged in the philosophical study of psychology. Historians note that Greek philosophers, including Thales, Plato, and Aristotle (especially in his De Anima treatise),[14] addressed the workings of the mind.[15] As early as the 4th century BC, Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes.[16]

In China, psychological understanding grew from the philosophical works of Laozi and Confucius, and later from the doctrines of Buddhism. This body of knowledge involves insights drawn from introspection and observation, as well as techniques for focused thinking and acting. It frames the universe as a division of, and interaction between, physical reality and mental reality, with an emphasis on purifying the mind in order to increase virtue and power. An ancient text known as The Yellow Emperor's Classic of Internal Medicine identifies the brain as the nexus of wisdom and sensation, includes theories of personality based on yin-yang balance, and analyzes mental disorder in terms of physiological and social disequilibria. Chinese scholarship focused on the brain advanced in the Qing Dynasty with the work of Western-educated Fang Yizhi (1611-1671), Liu Zhi (1660-1730), and Wang Qingren (1768-1831). Wang Qingren emphasized the importance of the brain as the center of the nervous system, linked mental disorder with brain diseases, investigated the causes of dreams and insomnia, and advanced a theory of hemispheric lateralization in brain function.[17]

Distinctions in types of awareness appear in the ancient thought of India, influenced by Hinduism. A central idea of the Upanishads is the distinction between a person's transient mundane self and their eternal unchanging soul. Divergent Hindu doctrines, and Buddhism, have challenged this hierarchy of selves, but have all emphasized the importance of reaching higher awareness. Yoga is a range of techniques used in pursuit of this goal. Much of the Sanskrit corpus was suppressed under the British East India Company followed by the British Raj in the 1800s. However, Indian doctrines influenced Western thinking via the Theosophical Society, a New Age group which became popular among Euro-American intellectuals.[18]

Psychology was a popular topic in Enlightenment Europe. In Germany, Gottfried Wilhelm Leibniz (1646-1716) applied his principles of calculus to the mind, arguing that mental activity took place on an indivisible continuum: most notably, that among an infinity of human perceptions and desires, the difference between conscious and unconscious awareness is only a matter of degree. Christian Wolff identified psychology as its own science, writing Psychologia empirica in 1732 and Psychologia rationalis in 1734. This notion advanced further under Immanuel Kant, who established the idea of anthropology, with psychology as an important subdivision. However, Kant explicitly and notoriously rejected the idea of experimental psychology, writing that "the empirical doctrine of the soul can also never approach chemistry even as a systematic art of analysis or experimental doctrine, for in it the manifold of inner observation can be separated only by mere division in thought, and cannot then be held separate and recombined at will (but still less does another thinking subject suffer himself to be experimented upon to suit our purpose), and even observation by itself already changes and displaces the state of the observed object." Having consulted philosophers Hegel and Herbart, in 1825 the Prussian state established psychology as a mandatory discipline in its rapidly expanding and highly influential educational system. However, this discipline did not yet embrace experimentation.[19] In England, early psychology involved phrenology and the response to social problems including alcoholism, violence, and the country's well-populated mental asylums.[20]

Gustav Fechner began conducting psychophysics research in Leipzig in the 1830s, articulating the principle that human perception of a stimulus varies logarithmically according to its intensity.[21] Fechner's 1860 Elements of Psychophysics challenged Kant's stricture against quantitative study of the mind.[19] In Heidelberg, Hermann von Helmholtz conducted parallel research on sensory perception, and trained physiologist Wilhelm Wundt. Wundt, in turn, came to Leipzig University, establishing the psychological laboratory which brought experimental psychology to the world. Wundt focused on breaking down mental processes into their most basic components, motivated in part by an analogy to recent advances in chemistry and its successful investigation of the elements and structure of material.[22] Paul Flechsig and Emil Kraepelin soon created another influential psychology laboratory at Leipzig, this one focused more on experimental psychiatry.[19]
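Fechner's logarithmic principle can be stated concretely. The sketch below is an illustration (the parameter values are invented): perceived sensation S grows as k·ln(I/I0), so each doubling of physical intensity adds the same constant increment of sensation.

```python
import math

def sensation(intensity: float, threshold: float = 1.0, k: float = 1.0) -> float:
    """Weber-Fechner law: S = k * ln(I / I0), with I0 the detection threshold."""
    return k * math.log(intensity / threshold)

# Doubling intensity from 2 to 4 and from 4 to 8 adds the same sensation:
print(sensation(4) - sensation(2))  # ln(2), about 0.693
print(sensation(8) - sensation(4))  # ln(2) again
```

This is why stimulus scales such as decibels are logarithmic: equal ratios of intensity map to equal steps of perceived loudness.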

Psychologists in Germany, Denmark, Austria, England, and the United States soon followed Wundt in setting up laboratories.[23] G. Stanley Hall, who studied with Wundt, formed a psychology lab at Johns Hopkins University in Maryland, which became internationally influential. Hall, in turn, trained Yujiro Motora, who brought experimental psychology, emphasizing psychophysics, to the Imperial University of Tokyo.[24] Wundt's assistant Hugo Münsterberg taught psychology at Harvard to students such as Narendra Nath Sen Gupta, who in 1905 founded a psychology department and laboratory at the University of Calcutta.[18] Wundt's students Walter Dill Scott, Lightner Witmer, and James McKeen Cattell worked on developing tests for mental ability. Cattell, who also studied with eugenicist Francis Galton, went on to found the Psychological Corporation. Witmer focused on mental testing of children; Scott, on selection of employees.[25]

Another student of Wundt, Edward Titchener, created the psychology program at Cornell University and advanced a doctrine of "structuralist" psychology. Structuralism sought to analyze and classify different aspects of the mind, primarily through the method of introspection.[26] William James, John Dewey and Harvey Carr advanced a more expansive doctrine called functionalism, attuned more to human-environment interactions. In 1890 James wrote an influential book, The Principles of Psychology, which expanded on the realm of structuralism, memorably described the human "stream of consciousness", and interested many American students in the emerging discipline.[26][27][28] Dewey integrated psychology with social issues, most notably by promoting the cause of progressive education to assimilate immigrants and inculcate moral values in children.[29]

A different strain of experimentalism, with more connection to physiology, emerged in South America, under the leadership of Horacio G. Piñero at the University of Buenos Aires.[30] Russia, too, placed greater emphasis on the biological basis for psychology, beginning with Ivan Sechenov's 1873 essay, "Who Is to Develop Psychology and How?" Sechenov advanced the idea of brain reflexes and aggressively promoted a deterministic viewpoint on human behavior.[31]

Wolfgang Köhler, Max Wertheimer and Kurt Koffka co-founded the school of Gestalt psychology (not to be confused with the Gestalt therapy of Fritz Perls). This approach is based upon the idea that individuals experience things as unified wholes. Rather than breaking down thoughts and behavior into smaller elements, as in structuralism, the Gestaltists maintained that the whole of experience is important, and differs from the sum of its parts. Other 19th-century contributors to the field include the German psychologist Hermann Ebbinghaus, a pioneer in the experimental study of memory, who developed quantitative models of learning and forgetting at the University of Berlin,[32] and the Russian-Soviet physiologist Ivan Pavlov, who discovered in dogs a learning process that was later termed "classical conditioning" and applied to human beings.[33]

One of the earliest psychology societies was La Société de Psychologie Physiologique in France, which lasted from 1885 to 1893. The first meeting of the International Congress of Psychology took place in Paris, in August 1889, amidst the World's Fair celebrating the centennial of the French Revolution. William James was one of three Americans among the four hundred attendees. The American Psychological Association was founded soon after, in 1892. The International Congress continued to be held, at different locations in Europe, with wider international participation. The Sixth Congress, Geneva 1909, included presentations in Russian, Chinese, and Japanese, as well as Esperanto. After a hiatus for World War I, the Seventh Congress met in Oxford, with substantially greater participation from the war-victorious Anglo-Americans. In 1929, the Congress took place at Yale University in New Haven, Connecticut, attended by hundreds of members of the American Psychological Association.[23] Tokyo Imperial University led the way in bringing the new psychology to the East, and from Japan these ideas diffused into China.[17][24]

American psychology gained status during World War I, during which a standing committee headed by Robert Yerkes administered mental tests ("Army Alpha" and "Army Beta") to almost 1.8 million GIs.[34] Subsequent funding for behavioral research came in large part from the Rockefeller family, via the Social Science Research Council.[35][36] Rockefeller charities funded the National Committee on Mental Hygiene, which promoted the concept of mental illness and lobbied for psychological supervision of child development.[34][37] Through the Bureau of Social Hygiene and later funding of Alfred Kinsey, Rockefeller foundations established sex research as a viable discipline in the U.S.[38] Under the influence of the Carnegie-funded Eugenics Record Office, the Draper-funded Pioneer Fund, and other institutions, the eugenics movement also had a significant impact on American psychology; in the 1910s and 1920s, eugenics became a standard topic in psychology classes.[39]

During World War II and the Cold War, the U.S. military and intelligence agencies established themselves as leading funders of psychology, through the armed forces and in the new Office of Strategic Services intelligence agency. University of Michigan psychologist Dorwin Cartwright reported that university researchers began large-scale propaganda research in 1939–1941, and "the last few months of the war saw a social psychologist become chiefly responsible for determining the week-by-week propaganda policy for the United States Government." Cartwright also wrote that psychologists had significant roles in managing the domestic economy.[40] The Army rolled out its new General Classification Test and engaged in massive studies of troop morale. In the 1950s, the Rockefeller Foundation and Ford Foundation collaborated with the Central Intelligence Agency to fund research on psychological warfare.[41] In 1965, public controversy called attention to the Army's Project Camelot, the "Manhattan Project" of social science, an effort which enlisted psychologists and anthropologists to analyze foreign countries for strategic purposes.[42][43]

In Germany after World War I, psychology held institutional power through the military, and subsequently expanded along with the rest of the military under the Third Reich.[19] Under the direction of Hermann Göring's cousin Matthias Göring, the Berlin Psychoanalytic Institute was renamed the Göring Institute. Freudian psychoanalysts were expelled and persecuted under the anti-Jewish policies of the Nazi Party, and all psychologists had to distance themselves from Freud and Adler.[44] The Göring Institute was well-financed throughout the war with a mandate to create a "New German Psychotherapy". This psychotherapy aimed to align suitable Germans with the overall goals of the Reich; as described by one physician: "Despite the importance of analysis, spiritual guidance and the active cooperation of the patient represent the best way to overcome individual mental problems and to subordinate them to the requirements of the Volk and the Gemeinschaft." Psychologists were to provide Seelenführung, leadership of the mind, to integrate people into the new vision of a German community.[45] Harald Schultz-Hencke melded psychology with the Nazi theory of biology and racial origins, criticizing psychoanalysis as a study of the weak and deformed.[46] Johannes Heinrich Schultz, a German psychologist recognized for developing the technique of autogenic training, prominently advocated sterilization and euthanasia of men considered genetically undesirable, and devised techniques for facilitating this process.[47] After the war, some new institutions were created and some psychologists were discredited due to Nazi affiliation. Alexander Mitscherlich founded a prominent applied psychoanalysis journal called Psyche and with funding from the Rockefeller Foundation established the first clinical psychosomatic medicine division at Heidelberg University. In 1970, psychology was integrated into the required studies of medical students.[48]

After the Russian Revolution, psychology was heavily promoted by the Bolsheviks as a way to engineer the "New Man" of socialism. Thus, university psychology departments trained large numbers of students, for whom positions were made available at schools, workplaces, cultural institutions, and in the military. An especial focus was pedology, the study of child development, regarding which Lev Vygotsky became a prominent writer.[31] The Bolsheviks also promoted free love and embraced the doctrine of psychoanalysis as an antidote to sexual repression.[49] Although pedology and intelligence testing fell out of favor in 1936, psychology maintained its privileged position as an instrument of the Soviet state.[31] Stalinist purges took a heavy toll and instilled a climate of fear in the profession, as elsewhere in Soviet society.[50] Following World War II, Jewish psychologists past and present (including Vygotsky, A. R. Luria, and Aron Zalkind) were denounced; Ivan Pavlov (posthumously) and Stalin himself were aggrandized as heroes of Soviet psychology.[51] Soviet academia was speedily liberalized during the Khrushchev Thaw, and cybernetics, linguistics, genetics, and other topics became acceptable again. There emerged a new field called "engineering psychology" which studied mental aspects of complex jobs (such as pilot and cosmonaut). Interdisciplinary studies became popular and scholars such as Georgy Shchedrovitsky developed systems theory approaches to human behavior.[52]

Twentieth-century Chinese psychology originally modeled itself on the United States, with translations from American authors like William James, the establishment of university psychology departments and journals, and the establishment of groups including the Chinese Association of Psychological Testing (1930) and the Chinese Psychological Society (1937). Chinese psychologists were encouraged to focus on education and language learning, with the aspiration that education would enable modernization and nationalization. John Dewey, who lectured to Chinese audiences in 1918–1920, had a significant influence on this doctrine. Chancellor T'sai Yuan-p'ei introduced him at Peking University as a greater thinker than Confucius. Kuo Zing-yang, who received a PhD at the University of California, Berkeley, became President of Zhejiang University and popularized behaviorism.[53] After the Chinese Communist Party gained control of the country, the Stalinist USSR became the leading influence, with Marxism–Leninism the leading social doctrine and Pavlovian conditioning the approved concept of behavior change. Chinese psychologists elaborated on Lenin's model of a "reflective" consciousness, envisioning an "active consciousness" (tzu-chueh neng-tung-li) able to transcend material conditions through hard work and ideological struggle. They developed a concept of "recognition" (jen-shih) which referred to the interface between individual perceptions and the socially accepted worldview. (Failure to correspond with party doctrine was "incorrect recognition".)[54] Psychology education was centralized under the Chinese Academy of Sciences, supervised by the State Council. In 1951 the Academy created a Psychology Research Office, which in 1956 became the Institute of Psychology. Most leading psychologists were educated in the United States, and the first concern of the Academy was re-education of these psychologists in the Soviet doctrines.
Child psychology and pedagogy for nationally cohesive education remained a central goal of the discipline.[55]

In 1920, Édouard Claparède and Pierre Bovet created a new applied psychology organization called the International Congress of Psychotechnics Applied to Vocational Guidance, later called the International Congress of Psychotechnics and then the International Association of Applied Psychology.[23] The IAAP is considered the oldest international psychology association.[56] Today, at least 65 international groups deal with specialized aspects of psychology.[56] In response to male predominance in the field, female psychologists in the U.S. formed the National Council of Women Psychologists in 1941. This organization became the International Council of Women Psychologists after World War II, and the International Council of Psychologists in 1959. Several associations including the Association of Black Psychologists and the Asian American Psychological Association have arisen to promote non-European racial groups in the profession.[56]

The world federation of national psychological societies is the International Union of Psychological Science (IUPsyS), founded in 1951 under the auspices of UNESCO, the United Nations cultural and scientific authority.[23][57] Psychology departments have since proliferated around the world, based primarily on the Euro-American model.[18][57] Since 1966, the Union has published the International Journal of Psychology.[23] IAAP and IUPsyS agreed in 1976 each to hold a congress every four years, on a staggered basis.[56]

The International Union recognizes 66 national psychology associations and at least 15 others exist.[56] The American Psychological Association is the oldest and largest.[56] Its membership has increased from 5,000 in 1945 to 100,000 in the present day.[26] The APA includes 54 divisions, which since 1960 have steadily proliferated to include more specialties. Some of these divisions, such as the Society for the Psychological Study of Social Issues and the American PsychologyLaw Society, began as autonomous groups.[56]

The Interamerican Society of Psychology, founded in 1951, aspires to promote psychology and coordinate psychologists across the Western Hemisphere. It holds the Interamerican Congress of Psychology and had 1,000 members in 2000. The European Federation of Professional Psychology Associations, founded in 1981, represents 30 national associations with a total of 100,000 individual members. At least 30 other international groups organize psychologists in different regions.[56]

In some places, governments legally regulate who can provide psychological services or represent themselves as a "psychologist".[58] The American Psychological Association defines a psychologist as someone with a doctoral degree in psychology.[59]

Early practitioners of experimental psychology distinguished themselves from parapsychology, which in the late nineteenth century enjoyed great popularity (including the interest of scholars such as William James), and indeed constituted the bulk of what people called "psychology". Parapsychology, hypnotism, and psychism were major topics of the early International Congresses. But students of these fields were eventually ostracized, and more or less banished from the Congress in 1900–1905.[23] Parapsychology persisted for a time at Tokyo Imperial University, with publications such as Clairvoyance and Thoughtography by Tomokichi Fukurai, but here too it was mostly shunned by 1913.[24]

As a discipline, psychology has long sought to fend off accusations that it is a "soft" science. Philosopher of science Thomas Kuhn's 1962 critique implied psychology overall was in a pre-paradigm state, lacking the agreement on overarching theory found in mature sciences such as chemistry and physics.[60] Because some areas of psychology rely on research methods such as surveys and questionnaires, critics asserted that psychology is not an objective science. Skeptics have suggested that personality, thinking, and emotion cannot be directly measured and are often inferred from subjective self-reports, which may be problematic. Experimental psychologists have devised a variety of ways to indirectly measure these elusive phenomenological entities.[61][62][63]

Divisions still exist within the field, with some psychologists more oriented towards the unique experiences of individual humans, which cannot be understood only as data points within a larger population. Critics inside and outside the field have argued that mainstream psychology has become increasingly dominated by a "cult of empiricism" which limits the scope of its study by using only methods derived from the physical sciences.[64] Feminist critiques along these lines have argued that claims to scientific objectivity obscure the values and agenda of (historically mostly male)[34] researchers. Jean Grimshaw, for example, argues that mainstream psychological research has advanced a patriarchal agenda through its efforts to control behavior.[65]

Psychologists generally consider the organism the basis of the mind, and therefore a vitally related area of study. Psychiatrists and neuropsychologists work at the interface of mind and body.[66] Biological psychology, also known as physiological psychology,[67] or neuropsychology is the study of the biological substrates of behavior and mental processes. Key research topics in this field include comparative psychology, which studies humans in relation to other animals, and perception which involves the physical mechanics of sensation as well as neural and mental processing.[68] For centuries, a leading question in biological psychology has been whether and how mental functions might be localized in the brain. From Phineas Gage to H. M. and Clive Wearing, individual people with mental issues traceable to physical damage have inspired new discoveries in this area.[67] Modern neuropsychology could be said to originate in the 1870s, when in France Paul Broca traced production of speech to the left frontal gyrus, thereby also demonstrating hemispheric lateralization of brain function. Soon after, Carl Wernicke identified a related area necessary for the understanding of speech.[69]

The contemporary field of behavioral neuroscience focuses on physical causes underpinning behavior. For example, physiological psychologists use animal models, typically rats, to study the neural, genetic, and cellular mechanisms that underlie specific behaviors such as learning and memory and fear responses.[70] Cognitive neuroscientists investigate the neural correlates of psychological processes in humans using neural imaging tools, and neuropsychologists conduct psychological assessments to determine, for instance, specific aspects and extent of cognitive deficit caused by brain damage or disease. The biopsychosocial model is an integrated perspective toward understanding consciousness, behavior, and social interaction. It assumes that any given behavior or mental process affects and is affected by dynamically interrelated biological, psychological, and social factors.[71]

Evolutionary psychology examines cognition and personality traits from an evolutionary perspective. This perspective suggests that psychological adaptations evolved to solve recurrent problems in human ancestral environments. Evolutionary psychology offers complementary explanations for the mostly proximate or developmental explanations developed by other areas of psychology: that is, it focuses mostly on ultimate or "why?" questions, rather than proximate or "how?" questions. "How?" questions are more directly tackled by behavioral genetics research, which aims to understand how genes and environment impact behavior.[72]

The search for biological origins of psychological phenomena has long involved debates about the importance of race, and especially the relationship between race and intelligence. The idea of white supremacy and indeed the modern concept of race itself arose during the process of world conquest by Europeans.[73] Carl von Linnaeus's four-fold classification of humans classified Europeans as intelligent and severe, Americans as contented and free, Asians as ritualistic, and Africans as lazy and capricious. Race was also used to justify the construction of socially specific mental disorders such as drapetomania and dysaesthesia aethiopica, the behavior of uncooperative African slaves.[74] After the creation of experimental psychology, "ethnical psychology" emerged as a subdiscipline, based on the assumption that studying primitive races would provide an important link between animal behavior and the psychology of more evolved humans.[75]

Psychologists take human behavior as a main area of study. Much of the research in this area began with tests on mammals, based on the idea that humans exhibit similar fundamental tendencies. Behavioral research ever aspires to improve the effectiveness of techniques for behavior modification.

Early behavioral researchers studied stimulus–response pairings, now known as classical conditioning. They demonstrated that behaviors could be linked through repeated association with stimuli eliciting pain or pleasure. Ivan Pavlov, known best for inducing dogs to salivate in the presence of a stimulus previously linked with food, became a leading figure in the Soviet Union and inspired followers to use his methods on humans.[31] In the United States, Edward Lee Thorndike initiated "connectionism" studies by trapping animals in "puzzle boxes" and rewarding them for escaping. Thorndike wrote in 1911: "There can be no moral warrant for studying man's nature unless the study will enable us to control his acts."[76] From 1910 to 1913 the American Psychological Association went through a sea change of opinion, away from mentalism and towards "behavioralism", and in 1913 John B. Watson coined the term behaviorism for this school of thought.[77] Watson's famous Little Albert experiment in 1920 demonstrated that repeated use of upsetting loud noises could instill phobias (aversions to other stimuli) in an infant human.[12][78] Karl Lashley, a close collaborator with Watson, examined biological manifestations of learning in the brain.[67]
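The gradual buildup of a conditioned response through repeated stimulus pairings can be sketched with the later Rescorla–Wagner learning rule, a standard formal model of classical conditioning (not Pavlov's own formalism; the parameter values here are purely illustrative):

```python
# Toy Rescorla-Wagner simulation: the associative strength V of a conditioned
# stimulus (CS) grows toward the maximum strength lam that the unconditioned
# stimulus (US, e.g. food) can support, at learning rate alpha.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return associative strength after each CS-US pairing."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * (lam - v)  # update proportional to the prediction error
        history.append(v)
    return history

strengths = rescorla_wagner(10)
print([round(s, 3) for s in strengths])  # rises quickly, then levels off
```

Each trial closes a fixed fraction of the remaining gap between current and maximum strength, which reproduces the negatively accelerated learning curves seen in conditioning experiments.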

Embraced and extended by Clark L. Hull, Edwin Guthrie, and others, behaviorism became a widely used research paradigm.[26] A new method of "instrumental" or "operant" conditioning added the concepts of reinforcement and punishment to the model of behavior change. Radical behaviorists avoided discussing the inner workings of the mind, especially the unconscious mind, which they considered impossible to assess scientifically.[79] Operant conditioning was first described by Miller and Konorski and popularized in the U.S. by B. F. Skinner, who emerged as a leading intellectual of the behaviorist movement.[80][81]

Noam Chomsky delivered an influential critique of radical behaviorism on the grounds that it could not adequately explain the complex mental process of language acquisition.[82][83][84] Martin Seligman and colleagues discovered that the conditioning of dogs led to outcomes ("learned helplessness") that opposed the predictions of behaviorism.[85][86] Skinner's behaviorism did not die, perhaps in part because it generated successful practical applications.[82] Edward C. Tolman advanced a hybrid "cognitive behavioral" model, most notably with his 1948 publication discussing the cognitive maps used by rats to guess at the location of food at the end of a modified maze.[87]

The Association for Behavior Analysis International was founded in 1974 and by 2003 had members from 42 countries. The field has been especially influential in Latin America, where it has a regional organization known as ALAMOC: La Asociación Latinoamericana de Análisis y Modificación del Comportamiento. Behaviorism also gained a strong foothold in Japan, where it gave rise to the Japanese Society of Animal Psychology (1933), the Japanese Association of Special Education (1963), the Japanese Society of Biofeedback Research (1973), the Japanese Association for Behavior Therapy (1976), the Japanese Association for Behavior Analysis (1979), and the Japanese Association for Behavioral Science Research (1994).[88] Today the field of behaviorism is also commonly referred to as behavior modification or behavior analysis.[88]

[Figure: two lists of color words, the first printed in ink colors that match the words and the second in mismatched colors.] The Stroop effect refers to the fact that naming the ink color of the first set of words is easier and quicker than for the second.

Cognitive psychology studies cognition, the mental processes underlying mental activity. Perception, attention, reasoning, thinking, problem solving, memory, learning, language, and emotion are areas of research. Classical cognitive psychology is associated with a school of thought known as cognitivism, whose adherents argue for an information processing model of mental function, informed by functionalism and experimental psychology.

On a broader level, cognitive science is an interdisciplinary enterprise of cognitive psychologists, cognitive neuroscientists, researchers in artificial intelligence, human–computer interaction, and computational neuroscience, as well as linguists, logicians and social scientists. Computer simulations are sometimes used to model phenomena of interest.

Starting in the 1950s, the experimental techniques developed by Wundt, James, Ebbinghaus, and others re-emerged as experimental psychology became increasingly cognitivist, concerned with information and its processing, and eventually constituted a part of the wider cognitive science.[89] Some called this development the cognitive revolution because it rejected the anti-mentalist dogma of behaviorism as well as the strictures of psychoanalysis.[89]

Social learning theorists, such as Albert Bandura, argued that the child's environment could make contributions of its own to the behaviors of an observant subject.[90]

Technological advances also renewed interest in mental states and representations. English neuroscientist Charles Sherrington and Canadian psychologist Donald O. Hebb used experimental methods to link psychological phenomena with the structure and function of the brain. The rise of computer science, cybernetics and artificial intelligence suggested the value of comparatively studying information processing in humans and machines. Research in cognition had proven practical since World War II, when it aided in the understanding of weapons operation.[91]

A popular and representative topic in this area is cognitive bias, or irrational thought. Psychologists (and economists) have classified and described a sizeable catalogue of biases which recur frequently in human thought. The availability heuristic, for example, is the tendency to overestimate the importance of something which happens to come readily to mind.

Elements of behaviorism and cognitive psychology were synthesized to form cognitive behavioral therapy, a form of psychotherapy modified from techniques developed by American psychologist Albert Ellis and American psychiatrist Aaron T. Beck. Cognitive psychology was subsumed along with other disciplines, such as philosophy of mind, computer science, and neuroscience, under the cover discipline of cognitive science.

Social psychology is the study of how humans think about each other and how they relate to each other. Social psychologists study such topics as the influence of others on an individual's behavior (e.g. conformity, persuasion), and the formation of beliefs, attitudes, and stereotypes about other people. Social cognition fuses elements of social and cognitive psychology in order to understand how people process, remember, or distort social information. The study of group dynamics reveals information about the nature and potential optimization of leadership, communication, and other phenomena that emerge at least at the microsocial level. In recent years, many social psychologists have become increasingly interested in implicit measures, mediational models, and the interaction of both person and social variables in accounting for behavior. The study of human society is therefore a potentially valuable source of information about the causes of psychiatric disorder. Some sociological concepts applied to psychiatric disorders are the social role, sick role, social class, life event, culture, migration, social institution, and total institution.

Psychoanalysis comprises a method of investigating the mind and interpreting experience; a systematized set of theories about human behavior; and a form of psychotherapy to treat psychological or emotional distress, especially conflict originating in the unconscious mind.[92] This school of thought originated in the 1890s with Austrian medical doctors including Josef Breuer (physician), Alfred Adler (physician), Otto Rank (psychoanalyst), and most prominently Sigmund Freud (neurologist). Freud's psychoanalytic theory was largely based on interpretive methods, introspection and clinical observations. It became very well known, largely because it tackled subjects such as sexuality, repression, and the unconscious. These subjects were largely taboo at the time, and Freud provided a catalyst for their open discussion in polite society.[49] Clinically, Freud helped to pioneer the method of free association and a therapeutic interest in dream interpretation.[93][94]

Swiss psychiatrist Carl Jung, influenced by Freud, elaborated a theory of the collective unconscious, a primordial force present in all humans featuring archetypes that exerted a profound influence on the mind. Jung's competing vision formed the basis for analytical psychology, which later led to the archetypal and process-oriented schools. Other well-known psychoanalytic scholars of the mid-20th century include Erik Erikson, Melanie Klein, D. W. Winnicott, Karen Horney, Erich Fromm, John Bowlby, and Sigmund Freud's daughter, Anna Freud. Throughout the 20th century, psychoanalysis evolved into diverse schools of thought which could be called Neo-Freudian. Among these schools are ego psychology, object relations, and interpersonal, Lacanian, and relational psychoanalysis.

Psychologists such as Hans Eysenck and philosophers including Karl Popper criticized psychoanalysis. Popper argued that psychoanalysis had been misrepresented as a scientific discipline,[95] whereas Eysenck said that psychoanalytic tenets had been contradicted by experimental data. By the end of the 20th century, psychology departments in American universities mostly marginalized Freudian theory, dismissing it as a "desiccated and dead" historical artifact.[96] However, researchers in the emerging field of neuro-psychoanalysis today defend some of Freud's ideas on scientific grounds,[97] while scholars of the humanities maintain that Freud was not a "scientist at all, but ... an interpreter".[96]

Humanistic psychology developed in the 1950s as a movement within academic psychology, in reaction to both behaviorism and psychoanalysis.[99] The humanistic approach sought to glimpse the whole person, not just fragmented parts of the personality or isolated cognitions.[100] Humanism focused on uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. It emphasized subjective meaning, rejection of determinism, and concern for positive growth rather than pathology.[citation needed] Some founders of the humanistic school of thought were American psychologists Abraham Maslow, who formulated a hierarchy of human needs, and Carl Rogers, who created and developed client-centered therapy. Later, positive psychology opened up humanistic themes to scientific modes of exploration.

The American Association for Humanistic Psychology, formed in 1963, declared:

Humanistic psychology is primarily an orientation toward the whole of psychology rather than a distinct area or school. It stands for respect for the worth of persons, respect for differences of approach, open-mindedness as to acceptable methods, and interest in exploration of new aspects of human behavior. As a "third force" in contemporary psychology, it is concerned with topics having little place in existing theories and systems: e.g., love, creativity, self, growth, organism, basic need-gratification, self-actualization, higher values, being, becoming, spontaneity, play, humor, affection, naturalness, warmth, ego-transcendence, objectivity, autonomy, responsibility, meaning, fair-play, transcendental experience, peak experience, courage, and related concepts.[101]

In the 1950s and 1960s, influenced by philosophers Søren Kierkegaard and Martin Heidegger, psychoanalytically trained American psychologist Rollo May pioneered an existential branch of psychology, which included existential psychotherapy: a method based on the belief that inner conflict within a person is due to that individual's confrontation with the givens of existence. Swiss psychoanalyst Ludwig Binswanger and American psychologist George Kelly may also be said to belong to the existential school.[102] Existential psychologists differed from more "humanistic" psychologists in their relatively neutral view of human nature and their relatively positive assessment of anxiety.[103] Existential psychologists emphasized the humanistic themes of death, free will, and meaning, suggesting that meaning can be shaped by myths, or narrative patterns,[104] and that it can be encouraged by an acceptance of the free will requisite to an authentic, albeit often anxious, regard for death and other future prospects.

Austrian existential psychiatrist and Holocaust survivor Viktor Frankl drew evidence of meaning's therapeutic power from reflections garnered from his own internment.[105] He created a variation of existential psychotherapy called logotherapy, a type of existentialist analysis that focuses on a will to meaning (in one's life), as opposed to Adler's Nietzschean doctrine of will to power or Freud's will to pleasure.[106]

Personality psychology is concerned with enduring patterns of behavior, thought, and emotion (commonly referred to as personality) in individuals. Theories of personality vary across different psychological schools and orientations. They carry different assumptions about such issues as the role of the unconscious and the importance of childhood experience. According to Freud, personality is based on the dynamic interactions of the id, ego, and super-ego.[107] Trait theorists, in contrast, attempt to analyze personality in terms of a discrete number of key traits by the statistical method of factor analysis. The number of proposed traits has varied widely. An early model, proposed by Hans Eysenck, suggested that there are three traits which comprise human personality: extraversion–introversion, neuroticism, and psychoticism. Raymond Cattell proposed a theory of 16 personality factors. Dimensional models of personality are receiving increasing support, and some version of dimensional assessment will be included in the forthcoming DSM-V.
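The trait theorists' use of factor analysis can be illustrated with a small sketch: a handful of latent "traits" generate correlated questionnaire responses, and factor analysis recovers that low-dimensional structure. This uses scikit-learn's FactorAnalysis on entirely synthetic data; the loadings and item counts are invented for illustration, not drawn from Eysenck's or Cattell's datasets.

```python
# Sketch of trait extraction by factor analysis on synthetic questionnaire
# data: two latent traits generate six observed item scores plus noise.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 500
traits = rng.normal(size=(n_respondents, 2))        # latent trait scores
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # items 1-3: trait A
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])  # items 4-6: trait B
items = traits @ loadings.T + 0.3 * rng.normal(size=(n_respondents, 6))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(fa.components_.shape)  # (2, 6): two recovered factors across six items
```

In real trait research, the analyst inspects the recovered loadings to name each factor (e.g. "extraversion"); the disagreement among theorists over how many factors to retain is exactly the disagreement over `n_components`.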

Myriad approaches have been devised to systematically assess different personality types, with the Woodworth Personal Data Sheet, developed during World War I, an early example of the modern technique. The Myers–Briggs Type Indicator sought to assess people according to the personality theories of Carl Jung. Behaviorist resistance to introspection led to the development of the Strong Vocational Interest Blank and Minnesota Multiphasic Personality Inventory, tests which ask more empirical questions and focus less on the psychodynamics of the respondent.[108]

The study of the unconscious mind, a part of the psyche outside the awareness of the individual which nevertheless influences thoughts and behavior, was a hallmark of early psychology. In one of the first psychology experiments conducted in the United States, C. S. Peirce and Joseph Jastrow found in 1884 that subjects could choose the minutely heavier of two weights even if consciously uncertain of the difference.[109] Freud popularized this concept, with terms like Freudian slip entering popular culture, to mean an uncensored intrusion of unconscious thought into one's speech and action. His 1901 text The Psychopathology of Everyday Life catalogues hundreds of everyday events which Freud explains in terms of unconscious influence. Pierre Janet advanced the idea of a subconscious mind, which could contain autonomous mental elements unavailable to the scrutiny of the subject.[110]

Behaviorism notwithstanding, the unconscious mind has maintained its importance in psychology. Cognitive psychologists have used a "filter" model of attention, according to which much information processing takes place below the threshold of consciousness, and only certain processes, limited by nature and by simultaneous quantity, make their way through the filter. Copious research has shown that subconscious priming of certain ideas can covertly influence thoughts and behavior.[110] A significant hurdle in this research is proving that a subject's conscious mind has not grasped a certain stimulus, due to the unreliability of self-reporting. For this reason, some psychologists prefer to distinguish between implicit and explicit memory. In another approach, one can also describe a subliminal stimulus as meeting an objective but not a subjective threshold.[111]

The automaticity model, which became widespread following exposition by John Bargh and others in the 1980s, describes sophisticated processes for executing goals which can be selected and performed over an extended duration without conscious awareness.[112][113] Some experimental data suggests that the brain begins to consider taking actions before the mind becomes aware of them.[111][114] This influence of unconscious forces on people's choices naturally bears on philosophical questions about free will. John Bargh, Daniel Wegner, and Ellen Langer are some prominent contemporary psychologists who describe free will as an illusion.[112][113][115]

Psychologists such as William James initially used the term motivation to refer to intention, in a sense similar to the concept of will in European philosophy. With the steady rise of Darwinian and Freudian thinking, instinct also came to be seen as a primary source of motivation.[116] According to drive theory, the forces of instinct combine into a single source of energy which exerts a constant influence. Psychoanalysis, like biology, regarded these forces as physical demands made by the organism on the nervous system. However, they believed that these forces, especially the sexual instincts, could become entangled and transmuted within the psyche. Classical psychoanalysis conceives of a struggle between the pleasure principle and the reality principle, roughly corresponding to id and ego. Later, in Beyond the Pleasure Principle, Freud introduced the concept of the death drive, a compulsion towards aggression, destruction, and psychic repetition of traumatic events.[117] Meanwhile, behaviorist researchers used simple dichotomous models (pleasure/pain, reward/punishment) and well-established principles such as the idea that a thirsty creature will take pleasure in drinking.[116][118] Clark Hull formalized the latter idea with his drive reduction model.[119]

Hunger, thirst, fear, sexual desire, and thermoregulation all seem to constitute fundamental motivations for animals.[118] Humans also seem to exhibit a more complex set of motivations (though theoretically these could be explained as resulting from primordial instincts), including desires for belonging, self-image, self-consistency, truth, love, and control.[120][121]

Motivation can be modulated or manipulated in many different ways. Researchers have found that eating, for example, depends not only on the organism's fundamental need for homeostasis (an important factor causing the experience of hunger) but also on circadian rhythms, food availability, food palatability, and cost.[118] Abstract motivations are also malleable, as evidenced by such phenomena as goal contagion: the adoption of goals, sometimes unconsciously, based on inferences about the goals of others.[122] Vohs and Baumeister suggest that contrary to the need-desire-fulfilment cycle of animal instincts, human motivations sometimes obey a "getting begets wanting" rule: the more you get a reward such as self-esteem, love, drugs, or money, the more you want it. They suggest that this principle can even apply to food, drink, sex, and sleep.[123]

Mainly focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on cognitive, affective, moral, social, or neural development. Researchers who study children use a number of unique research methods to make observations in natural settings or to engage them in experimental tasks. Such tasks often resemble specially designed games and activities that are both enjoyable for the child and scientifically useful, and researchers have even devised clever methods to study the mental processes of infants. In addition to studying children, developmental psychologists also study aging and processes throughout the life span, especially at other times of rapid change (such as adolescence and old age). Developmental psychologists draw on the full range of psychological theories to inform their research.

All researched psychological traits are influenced by both genes and environment, to varying degrees.[124][125] These two sources of influence are often confounded in observational research of individuals or families. An example is the transmission of depression from a depressed mother to her offspring. Theory may hold that the offspring, by virtue of having a depressed mother in his or her environment, is at risk for developing depression. However, risk for depression is also influenced to some extent by genes. The mother may carry genes that contribute to her depression and may also have passed those genes on to her offspring, thus increasing the offspring's risk for depression. Genes and environment in this simple transmission model are completely confounded. Experimental and quasi-experimental behavioral genetic research uses genetic methodologies to disentangle this confound and understand the nature and origins of individual differences in behavior.[72] Traditionally this research has been conducted using twin studies and adoption studies, two designs where genetic and environmental influences can be partially un-confounded. More recently, the availability of microarray molecular genetic or genome sequencing technologies allows researchers to measure participant DNA variation directly, and test whether individual genetic variants within genes are associated with psychological traits and psychopathology through methods including genome-wide association studies. One goal of such research is similar to that in positional cloning and its success in Huntington's disease: once a causal gene is discovered, biological research can be conducted to understand how that gene influences the phenotype.
One major result of genetic association studies is the general finding that psychological traits and psychopathology, as well as complex medical diseases, are highly polygenic,[126][127][128][129][130] where a large number (on the order of hundreds to thousands) of genetic variants, each of small effect, contribute to individual differences in the behavioral trait or propensity to the disorder. Active research continues to understand the genetic and environmental bases of behavior and their interaction.
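The "large number of genetic variants, each of small effect" idea is commonly operationalized as a polygenic score: a weighted sum of an individual's allele counts across many variants. A minimal sketch in Python, using simulated effect sizes and genotypes (illustrative values only, not real GWAS results):

```python
import random

random.seed(0)

# Hypothetical per-variant effect sizes for 1,000 variants, each tiny.
n_variants = 1000
effects = [random.gauss(0, 0.01) for _ in range(n_variants)]

def polygenic_score(genotype, effects):
    """Weighted sum of allele counts (0, 1, or 2) times per-variant effects."""
    return sum(g * b for g, b in zip(genotype, effects))

# One simulated individual: allele counts drawn at an assumed allele frequency of 0.3.
genotype = [sum(random.random() < 0.3 for _ in range(2)) for _ in range(n_variants)]
score = polygenic_score(genotype, effects)
```

In practice the effect sizes come from a genome-wide association study on an independent sample; the score itself is just this inner product.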

Psychology encompasses many subfields and includes different approaches to the study of mental processes and behavior:

Psychological testing has ancient origins, such as examinations for the Chinese civil service dating back to 2200 BC. Written exams began during the Han dynasty (202 BC – AD 200). By 1370, the Chinese system required a stratified series of tests, involving essay writing and knowledge of diverse topics. The system was ended in 1906.[131] In Europe, mental assessment took a more physiological approach, with theories of physiognomy (judgment of character based on the face) described by Aristotle in 4th century BC Greece. Physiognomy remained current through the Enlightenment, and added the doctrine of phrenology: a study of mind and intelligence based on simple assessment of neuroanatomy.[132]

When experimental psychology came to Britain, Francis Galton was a leading practitioner, and, with his procedures for measuring reaction time and sensation, is considered an inventor of modern mental testing (also known as psychometrics).[133] James McKeen Cattell, a student of Wundt and Galton, brought the concept to the United States, and in fact coined the term "mental test".[134] In 1901, Cattell's student Clark Wissler published discouraging results, suggesting that mental testing of Columbia and Barnard students failed to predict their academic performance.[134] In response to 1904 orders from the Minister of Public Instruction, French psychologists Alfred Binet and Théodore Simon elaborated a new test of intelligence in 1905–1911, using a range of questions diverse in their nature and difficulty. Binet and Simon introduced the concept of mental age and referred to the lowest scorers on their test as idiots. Henry H. Goddard put the Binet–Simon scale to work and introduced classifications of mental level such as imbecile and feebleminded. In 1916 (after Binet's death), Stanford professor Lewis M. Terman modified the Binet–Simon scale (renamed the Stanford–Binet scale) and introduced the intelligence quotient as a score report.[135] From this test, Terman concluded that mental retardation "represents the level of intelligence which is very, very common among Spanish-Indians and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial."[136]

Following the Army Alpha and Army Beta tests for soldiers in World War I, mental testing became popular in the US, where it was soon applied to school children. The federally created National Intelligence Test was administered to 7 million children in the 1920s, and in 1926 the College Entrance Examination Board created the Scholastic Aptitude Test to standardize college admissions.[137] The results of intelligence tests were used to argue for segregated schools and economic functions, i.e., the preferential training of Black Americans for manual labor. These practices were criticized by black intellectuals such as Horace Mann Bond and Allison Davis.[136] Eugenicists used mental testing to justify and organize compulsory sterilization of individuals classified as mentally retarded.[39] In the United States, tens of thousands of men and women were sterilized. Setting a precedent which has never been overturned, the U.S. Supreme Court affirmed the constitutionality of this practice in the 1927 case Buck v. Bell.[138]

Today mental testing is a routine phenomenon for people of all ages in Western societies.[139] Modern testing aspires to criteria including standardization of procedure, consistency of results, output of an interpretable score, statistical norms describing population outcomes, and, ideally, effective prediction of behavior and life outcomes outside of testing situations.[140]

The provision of psychological health services is generally called clinical psychology in the U.S. The definitions of this term are various and may include school psychology and counseling psychology. Practitioners typically include people who have graduated from doctoral programs in clinical psychology but may also include others. In Canada, the above groups usually fall within the larger category of professional psychology. In Canada and the US, practitioners get bachelor's degrees and doctorates, then spend one year in an internship and one year in postdoctoral education. In Mexico and most other Latin American and European countries, psychologists do not get bachelor's and doctorate degrees; instead, they take a three-year professional course following high school.[59] Clinical psychology is at present the largest specialization within psychology.[141] It includes the study and application of psychology for the purpose of understanding, preventing, and relieving psychologically based distress, dysfunction or mental illness and to promote subjective well-being and personal development. Central to its practice are psychological assessment and psychotherapy although clinical psychologists may also engage in research, teaching, consultation, forensic testimony, and program development and administration.[142]

Credit for the first psychology clinic in the United States typically goes to Lightner Witmer, who established his practice in Philadelphia in 1896. Another modern psychotherapist was Morton Prince.[141] For the most part, in the first part of the twentieth century, most mental health care in the United States was performed by specialized medical doctors called psychiatrists. Psychology entered the field with its refinements of mental testing, which promised to improve diagnosis of mental problems. For their part, some psychiatrists became interested in using psychoanalysis and other forms of psychodynamic psychotherapy to understand and treat the mentally ill.[34] In this type of treatment, a specially trained therapist develops a close relationship with the patient, who discusses wishes, dreams, social relationships, and other aspects of mental life. The therapist seeks to uncover repressed material and to understand why the patient creates defenses against certain thoughts and feelings. An important aspect of the therapeutic relationship is transference, in which deep unconscious feelings in a patient reorient themselves and become manifest in relation to the therapist.[143]

Psychiatric psychotherapy blurred the distinction between psychiatry and psychology, and this trend continued with the rise of community mental health facilities and behavioral therapy, a thoroughly non-psychodynamic model which used behaviorist learning theory to change the actions of patients. A key aspect of behavior therapy is empirical evaluation of the treatment's effectiveness. In the 1970s, cognitive-behavior therapy arose, using similar methods and now including the cognitive constructs which had gained popularity in theoretical psychology. A key practice in behavioral and cognitive-behavioral therapy is exposing patients to things they fear, based on the premise that their responses (fear, panic, anxiety) can be deconditioned.[144]

Mental health care today involves psychologists and social workers in increasing numbers. In 1977, National Institute of Mental Health director Bertram Brown described this shift as a source of "intense competition and role confusion".[34] Graduate programs issuing doctorates in psychology (PsyD) emerged in the 1950s and underwent rapid increase through the 1980s. This degree is intended to train practitioners who might conduct scientific research.[59]

Some clinical psychologists may focus on the clinical management of patients with brain injury; this area is known as clinical neuropsychology. In many countries, clinical psychology is a regulated mental health profession. The emerging field of disaster psychology (see crisis intervention) involves professionals who respond to large-scale traumatic events.[145]

The work performed by clinical psychologists tends to be influenced by various therapeutic approaches, all of which involve a formal relationship between professional and client (usually an individual, couple, family, or small group). Typically, these approaches encourage new ways of thinking, feeling, or behaving. Four major theoretical perspectives are psychodynamic, cognitive behavioral, existential–humanistic, and systems or family therapy. There has been a growing movement to integrate the various therapeutic approaches, especially with an increased understanding of issues regarding culture, gender, spirituality, and sexual orientation. With the advent of more robust research findings regarding psychotherapy, there is evidence that most of the major therapies have equal effectiveness, with the key common element being a strong therapeutic alliance.[146][147] Because of this, more training programs and psychologists are now adopting an eclectic therapeutic orientation.[148][149][150][151][152]

Diagnosis in clinical psychology usually follows the Diagnostic and Statistical Manual of Mental Disorders (DSM), a handbook first published by the American Psychiatric Association in 1952. New editions over time have increased in size and focused more on medical language.[153] The study of mental illnesses is called abnormal psychology.

Educational psychology is the study of how humans learn in educational settings, the effectiveness of educational interventions, the psychology of teaching, and the social psychology of schools as organizations. The work of child psychologists such as Lev Vygotsky, Jean Piaget, Bernard Luskin, and Jerome Bruner has been influential in creating teaching methods and educational practices. Educational psychology is often included in teacher education programs in places such as North America, Australia, and New Zealand.

School psychology combines principles from educational psychology and clinical psychology to understand and treat students with learning disabilities; to foster the intellectual growth of gifted students; to facilitate prosocial behaviors in adolescents; and otherwise to promote safe, supportive, and effective learning environments. School psychologists are trained in educational and behavioral assessment, intervention, prevention, and consultation, and many have extensive training in research.[154]

Industrialists soon brought the nascent field of psychology to bear on the study of scientific management techniques for improving workplace efficiency. This field was at first called economic psychology or business psychology; later, industrial psychology, employment psychology, or psychotechnology.[155] An important early study examined workers at Western Electric's Hawthorne plant in Cicero, Illinois from 1924 to 1932. With funding from the Laura Spelman Rockefeller Fund and guidance from Australian psychologist Elton Mayo, Western Electric experimented on thousands of factory workers to assess their responses to illumination, breaks, food, and wages. The researchers came to focus on workers' responses to observation itself, and the term Hawthorne effect is now used to describe the fact that people work harder when they think they're being watched.[156]

The name industrial and organizational psychology (I–O) arose in the 1960s and became enshrined as the Society for Industrial and Organizational Psychology, Division 14 of the American Psychological Association, in 1973.[155] The goal is to optimize human potential in the workplace. Personnel psychology, a subfield of I–O psychology, applies the methods and principles of psychology in selecting and evaluating workers. I–O psychology's other subfield, organizational psychology, examines the effects of work environments and management styles on worker motivation, job satisfaction, and productivity.[157] The majority of I–O psychologists work outside of academia, for private and public organizations and as consultants.[155] A psychology consultant working in business today might expect to provide executives with information and ideas about their industry, their target markets, and the organization of their company.[158]

One role for psychologists in the military is to evaluate and counsel soldiers and other personnel. In the U.S., this function began during World War I, when Robert Yerkes established the School of Military Psychology at Fort Oglethorpe in Georgia, to provide psychological training for military staff.[34][159] Today, U.S. Army psychology includes psychological screening, clinical psychotherapy, suicide prevention, and treatment for post-traumatic stress, as well as other aspects of health and workplace psychology such as smoking cessation.[160]

Psychologists may also work on a diverse set of campaigns known broadly as psychological warfare. Psychological warfare chiefly involves the use of propaganda to influence enemy soldiers and civilians. In the case of so-called black propaganda the propaganda is designed to seem like it originates from a different source.[161] The CIA's MKULTRA program involved more individualized efforts at mind control, involving techniques such as hypnosis, torture, and covert involuntary administration of LSD.[162] The U.S. military used the name Psychological Operations (PSYOP) until 2010, when these were reclassified as Military Information Support Operations (MISO), part of Information Operations (IO).[163] Psychologists are sometimes involved in assisting the interrogation and torture of suspects, though this has sometimes been denied by those involved and sometimes opposed by others.[164]

Medical facilities increasingly employ psychologists to perform various roles. A prominent aspect of health psychology is the psychoeducation of patients: instructing them in how to follow a medical regimen. Health psychologists can also educate doctors and conduct research on patient compliance.[165]

Psychologists in the field of public health use a wide variety of interventions to influence human behavior. These range from public relations campaigns and outreach to governmental laws and policies. Psychologists study the composite influence of all these different tools in an effort to influence whole populations of people.[166]

Black American psychologists Kenneth and Mamie Clark studied the psychological impact of segregation and testified with their findings in the desegregation case Brown v. Board of Education (1954).[167]

Positive psychology is the study of factors which contribute to human happiness and well-being, focusing more on people who are currently healthy. In 2010 Clinical Psychology Review published a special issue devoted to positive psychological interventions, such as gratitude journaling and the physical expression of gratitude. Positive psychological interventions have been limited in scope, but their effects are thought to be superior to that of placebos, especially with regard to helping people with body image problems.

Quantitative psychological research lends itself to the statistical testing of hypotheses. Although the field makes abundant use of randomized and controlled experiments in laboratory settings, such research can only assess a limited range of short-term phenomena. Thus, psychologists also rely on creative statistical methods to glean knowledge from clinical trials and population data.[168] These include the Pearson product–moment correlation coefficient, the analysis of variance, multiple linear regression, logistic regression, structural equation modeling, and hierarchical linear modeling. The measurement and operationalization of important constructs is an essential part of these research designs.
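The first of those statistics, the Pearson product–moment correlation, is simple enough to compute directly. A minimal Python implementation (function name and sample data are illustrative):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two variables."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Perfectly linear data yields r = 1.0; perfectly inverse data yields r = -1.0.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # 1.0
```

In practice a statistics package would be used, but the formula is just the covariance scaled by the two standard deviations.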

A true experiment with random allocation of subjects to conditions allows researchers to make strong inferences about causal relationships. In an experiment, the researcher alters parameters of influence, called independent variables, and measures resulting changes of interest, called dependent variables. Prototypical experimental research is conducted in a laboratory with a carefully controlled environment.
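Random allocation of subjects to conditions can be sketched in a few lines; the helper below is an illustration, not a standard library routine:

```python
import random

def randomize(subjects, conditions, seed=None):
    """Randomly allocate subjects to experimental conditions of near-equal size."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, s in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(s)
    return groups

# 20 hypothetical subjects split between a treatment and a control condition.
groups = randomize(list(range(20)), ["treatment", "control"], seed=42)
```

Because assignment depends only on the shuffle, any systematic difference between the groups before the intervention is due to chance, which is what licenses the causal inference.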

Repeated-measures experiments are those which take place through intervention on multiple occasions. In research on the effectiveness of psychotherapy, experimenters often compare a given treatment with placebo treatments, or compare different treatments against each other. Treatment type is the independent variable. The dependent variables are outcomes, ideally assessed in several ways by different professionals.[171] Using a crossover design, researchers can further increase the strength of their results by administering both of two treatments to both groups of subjects, in opposite orders.

Quasi-experimental design refers especially to situations precluding random assignment to different conditions. Researchers can use common sense to consider how much the nonrandom assignment threatens the study's validity.[172] For example, in research on the best way to affect reading achievement in the first three grades of school, school administrators may not permit educational psychologists to randomly assign children to phonics and whole language classrooms, in which case the psychologists must work with preexisting classroom assignments. Psychologists will compare the achievement of children attending phonics and whole language classes.

Experimental researchers typically use a statistical hypothesis testing model which involves making predictions before conducting the experiment, then assessing how well the data supports the predictions. (These predictions may originate from a more abstract scientific hypothesis about how the phenomenon under study actually works.) Analysis of variance (ANOVA) statistical techniques are used to distinguish unique results of the experiment from the null hypothesis that variations result from random fluctuations in data. In psychology, the widely used standard ascribes statistical significance to results which have less than 5% probability of being explained by random variation.[173]
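The F statistic at the heart of one-way ANOVA (between-group variance relative to within-group variance) can be computed directly; a minimal, illustrative Python version:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group over within-group mean squares."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Two small illustrative groups with clearly different means.
f = one_way_anova_f([1, 2, 3], [4, 5, 6])  # 13.5
```

Judging significance at the 5% level then requires comparing F against the appropriate F distribution with (k − 1, n − k) degrees of freedom, which a statistics package would supply.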

Statistical surveys are used in psychology for measuring attitudes and traits, monitoring changes in mood, checking the validity of experimental manipulations, and for other psychological topics. Most commonly, psychologists use paper-and-pencil surveys. However, surveys are also conducted over the phone or through e-mail. Web-based surveys are increasingly used to conveniently reach many subjects.

Neuropsychological tests, such as the Wechsler scales and Wisconsin Card Sorting Test, are mostly questionnaires or simple tasks used to assess a specific type of mental function in the respondent. These can be used in experiments, as in the case of lesion experiments evaluating the results of damage to a specific part of the brain.[174]

Observational studies analyze uncontrolled data in search of correlations; multivariate statistics are typically used to interpret the more complex situation. Cross-sectional observational studies use data from a single point in time, whereas longitudinal studies are used to study trends across the life span. Longitudinal studies track the same people, and therefore detect more individual, rather than cultural, differences. However, they suffer from lack of controls and from confounding factors such as selective attrition (the bias introduced when a certain type of subject disproportionately leaves a study).

Exploratory data analysis refers to a variety of practices which researchers can use to visualize and analyze existing sets of data. In Peirce's three modes of inference, exploratory data analysis corresponds to abduction, or hypothesis formation.[175] Meta-analysis is the technique of integrating the results from multiple studies and interpreting the statistical properties of the pooled dataset.[176]
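In its simplest fixed-effect form, meta-analysis pools study estimates weighted by their inverse variances, so more precise studies count for more. A minimal sketch with made-up study results:

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted pooled estimate under a fixed-effect model."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Two hypothetical studies: the more precise one (variance 0.01) dominates,
# pulling the pooled effect toward its estimate of 0.4.
pooled, pooled_var = fixed_effect_meta([0.4, 0.1], [0.01, 0.04])  # pooled = 0.34
```

Real meta-analyses also test for heterogeneity between studies and may switch to a random-effects model when the studies appear to estimate different underlying effects.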

A classic and popular tool used to relate mental and neural activity is the electroencephalogram (EEG), a technique that uses electrodes on a person's scalp to measure amplified voltage changes in different parts of the brain. Hans Berger, the first researcher to use EEG on an unopened skull, quickly found that brains exhibit signature "brain waves": electric oscillations which correspond to different states of consciousness. Researchers subsequently refined statistical methods for synthesizing the electrode data, and identified unique brain wave patterns such as the delta wave observed during non-REM sleep.[177]

Newer functional neuroimaging techniques include functional magnetic resonance imaging and positron emission tomography, both of which track the flow of blood through the brain. These technologies provide more localized information about activity in the brain and create representations of the brain with widespread appeal. They also provide insight which avoids the classic problems of subjective self-reporting. It remains challenging to draw hard conclusions about where in the brain specific thoughts originateor even how usefully such localization corresponds with reality. However, neuroimaging has delivered unmistakable results showing the existence of correlations between mind and brain. Some of these draw on a systemic neural network model rather than a localized function model.[178][179][180]

Psychiatric interventions such as transcranial magnetic stimulation and of course drugs also provide information about brain–mind interactions. Psychopharmacology is the study of drug-induced mental effects.

View post:
Psychology - Wikipedia, the free encyclopedia

Neuroscience Program – University of Illinois

Welcome to the NSP at Illinois Welcome to the Neuroscience Program (NSP) at the University of Illinois at Urbana-Champaign. The NSP is an interdisciplinary program of study and research leading to the doctoral degree. We offer a rigorous yet flexible program designed to foster the growth of the student through research activities, close interactions with the faculty, and exposure to top neuroscientists through our seminar series and attendance at professional meetings.

Recognizing that there are many paths to success in neuroscience, the program imposes few specific requirements. Students design their own programs leading to the Ph.D., with oversight by faculty committees ensuring appropriate depth and breadth of training.

The NSP currently has over 85 affiliated faculty from more than 20 departments, and 70 students, studying the brain from a broad range of perspectives. We invite you to learn more about our program, research, and people.

The rest is here:
Neuroscience Program - University of Illinois

Neuroscience – Cabell Huntington Hospital – Huntington, WV

For more information, please call 304.691.1787

The neuroscience staffincludes many experienced and respectedphysicians who bring unique skills, experience and training to this world-class referral center for the Tri-State area.They havetreated patients from across the United States, as well as throughout the region. The Advanced Primary Stroke Center has earned The Joint Commission's Gold Seal of Approval by demonstrating compliance with The Joint Commission's national standards for healthcare quality and safety in disease-specific care. And thanks to the leadership of these skilled specialists, Cabell Huntington Hospital has been named a Blue Distinction Center for Spine Surgery by Highmark Blue Cross Blue Shield West Virginia and earned a Top 10% in the Nation Quality Rating for Spinal Surgery from Carechex, a medical quality rating service.

Neurology services and neurophysiology testing are available for both adults and children. Our specialists diagnose, evaluate and provide treatment for epilepsy, headache, movement disorders, multiple sclerosis, stroke and neuromuscular diseases. Neurosurgery services are available for both adults and children, including surgery for brain tumors, movement disorders, epilepsy, trigeminal neuralgia and other conditions affecting the brain, spine, spinal cord, pituitary gland and/or neurovascular system.

Dr. Tony Alberico, a board-certified neurosurgeon, offers a breadth of neurosurgical experience that rivals any in the region. He has quickly established himself as an excellent surgeon with outstanding judgment. He serves as the chairman of the Department of Neuroscience at the Joan C. Edwards School of Medicine and as the director of the Back and Spine Center. Dr. Alberico is experienced in the management of spinal disorders and in developing advances in spine care.

Dr. Paul Ferguson is a board-certified neurologist who specializes in diagnosing and treating headaches, including chronic migraines. He is experienced in managing the complexities of multiple sclerosis and providing patients with the most advanced medical treatments, neuroimaging and physical therapy. Dr. Ferguson earned his medical degree at the MU Joan C. Edwards School of Medicine and completed his residency in neurology at Wake Forest University Baptist Medical Center.

Dr. Samrina Hanif is a fellowship-trained neurologist who specializes in the diagnosis and treatment of epilepsy. Dr. Hanif earned her medical degree at Dow Medical University in Karachi, Pakistan, and she completed her residency at New York Medical College in Manhattan. Her fellowship training in epilepsy/clinical neurophysiology was completed at Vanderbilt University. Her special interests include refractory epilepsy and treating children with autism and epilepsy.

Dr. Alastair T. Hoyt, a fellowship-trained physician specializing in neurosurgery, offers diagnosis and treatment of disorders or injuries to the brain, spinal cord and/or peripheral nerves. After graduating from medical school at the University of Nebraska, Dr. Hoyt completed his residency in neurosurgery at the Medical College of Wisconsin and a fellowship at the Barrow Neurological Institute, along with additional training in Gamma Knife radiosurgery.

Dr. Paul Knowles is certified by the American Board of Pediatrics and the American Board of Psychiatry and Neurology. He completed fellowship training in pediatric neurology at Baylor University College of Medicine in Houston, Texas, and a pediatric residency at Children's Hospital Medical Center in Akron, Ohio. He earned his medical degree at Eastern Virginia Medical School in Norfolk, Virginia. Dr. Knowles has more than 30 years of experience in pediatric neurology.

Dominika Lozowska, MD, a fellowship-trained physician specializing in neurology, offers diagnosis and treatment of disorders of the central and peripheral nervous system, such as epilepsy, Parkinson's disease and multiple sclerosis. She completed her residency in neurology at Fletcher Allen Health Center. She then completed a fellowship in neurophysiology at the University of South Florida and a neuromuscular fellowship at the University of Colorado School of Medicine.

Dr. Rida Mazagri's extensive training and experience includes a fellowship in Clinical Stroke Research at the University of Saskatchewan and a fellowship in Pediatric Neurosurgery at the University of Ottawa/Children's Hospital of Eastern Ontario. He earned his medical degree at Al-Fateh University Medical School in Tripoli, Libya, and he is board certified in neurological surgery. Dr. Mazagri treats adult and pediatric patients.

Paul Muizelaar, MD, PhD, an experienced, fellowship-trained neurosurgeon, has an extensive career in neurosurgery and is affiliated with the Back and Spine Center at CHH. He is certified by the Royal Dutch Board of Medical Specialties in Neurological Surgery. He earned his medical degree and doctorate at the University of Amsterdam School of Medicine, and he completed fellowship training in neurosurgery at the Medical College of Virginia.

Dr. Justin Nolte is a neurologist who specializes in stroke care and oversees Cabell Huntington Hospital's Advanced Primary Stroke Center, which has earned The Joint Commission's Gold Seal of Approval. Dr. Nolte earned his medical degree from the Marshall University Joan C. Edwards School of Medicine and completed a residency in neurology at the Medical University of South Carolina.

Dr. Mitzi Payne completed a fellowship in pediatric neurology and offers a variety of services unique to the region, including Botox injections for children suffering from spasticity caused by cerebral palsy and other disorders. She also manages intrathecal pumps implanted for severe spasticity. She manages pediatric epilepsy, including interpreting EEGs, pediatric headache disorders and other neurologic conditions unique to children.

Dr. Sona Shah is the director of CHH's Epilepsy Center, the region's first center to provide care for patients with epilepsy and other seizure disorders. Dr. Shah completed her neurology residency at SUNY Downstate Medical Center in Brooklyn, NY, as well as fellowships in neurophysiology and epilepsy at the University of Chicago. She is board certified in neurology, clinical neurophysiology, epilepsy monitoring and neuromuscular medicine.

Collectively, the members of the neuroscience staff have published hundreds of peer-reviewed scientific articles, book chapters and abstracts. They have lectured extensively, both nationally and internationally, and have received multiple patents for medical breakthroughs. Although recognized for their academic achievements, their clinical experience and training are unparalleled in the region.

For more information or to schedule an appointment with a member of the Marshall University Department of Neuroscience, please call 304.691.1787.

See the original post here:
Neuroscience - Cabell Huntington Hospital - Huntington, WV

About Neuroscience Graduate Program | Neuroscience Graduate …

An Interdisciplinary Approach to Neuroscience

The University of California, San Francisco offers an interdisciplinary program for graduate training in neuroscience. The purpose of this program is to train doctoral students for independent research and teaching in neuroscience. Participation in Neuroscience Program activities does not require membership in the Neuroscience Program. The program welcomes attendance of all interested UCSF faculty, students and other trainees at its retreat, seminars and journal club. Our program seeks to train students who will be expert in one particular approach to neuroscientific research, but who will also have a strong general background in other areas of neuroscience and related disciplines. To achieve this objective, our students take interdisciplinary core and advanced courses in neuroscience, as well as related courses sponsored by other graduate programs. In addition, they carry out research under the supervision of faculty members in the program.

Read the original post:
About Neuroscience Graduate Program | Neuroscience Graduate ...

The diaphragm anatomy & embryology – SlideShare


Read more from the original source:
The diaphragm anatomy & embryology - SlideShare