Research on Bizarre Rodent Genetics Solves a Mystery And Then Things Got Even Stranger – SciTechDaily

A Taiwan vole, closely related to the creeping vole described in the study. Credit: Lai Wagtail / Flickr (CC BY-NC-ND 2.0)

Open up Scott Roy's Twitter bio and you'll see a simple but revealing sentence: "The more I learn the more I'm confused." Now the rest of the scientific world can share in his confusion. The San Francisco State University associate professor of Biology's most recent research, published earlier this month in one of the scientific world's most prestigious journals, catalogues a strange and confounding system of genes in a tiny rodent that scientists have ignored for decades.

"This is basically the weirdest sex chromosome system known to science," Roy said. Nobody ordered this. But he's serving it anyway.

The owner of those chromosomes is the creeping vole, a burrowing rodent native to the Pacific Northwest. Scientists have known since the '60s that the species had some odd genes: their number of X and Y chromosomes (bundles of DNA that play a large role in determining sex) is off from what's expected in male and female mammals.

That finding caught Roy's eye when it was presented by a guest speaker at a San Francisco State seminar, and he realized that modern technology might be able to shed new light on the mysteries hiding in the vole's DNA. After working with collaborators to disentangle the vole's genetic history (resulting in one of the most completely sequenced mammal genomes in existence, according to Roy), the story only got stranger.

The team found that the X and Y chromosomes had fused somewhere in the rodent's past, and that the X chromosome in males had started looking and acting like a Y chromosome. The numbers of X chromosomes in male and female voles changed too, along with smaller pieces of DNA getting swapped between them. The researchers published their results in Science on May 7, 2021.

Drastic genetic changes like these are exceptionally rare: the way genes determine sex in mammals has stayed mostly the same for about 180 million years, Roy explains. "Mammals, with few exceptions, are kind of boring," he said. "Previously we would have thought something like this is impossible."

So how did the genes of this unassuming rodent end up so jumbled? It's not an easy question to answer, especially since evolution is bound to produce some strangeness simply by chance. Roy, however, is determined to figure out the why. He suspects that what the team found in the vole's genome is something like the aftermath of an evolutionary battle for dominance between the X and Y chromosomes.

The research couldn't have happened, Roy says, without collaborations with Oregon fish and wildlife biologists who had a creeping vole sample sitting in a lab freezer. He also teamed up with a group from Oklahoma State University after the two groups started chatting about creeping vole DNA sequences posted on the internet and realized they were working on the same question.

Another key was working at a teaching-focused institution. Roy says he has the time to develop ideas with colleagues and students at SF State, and he can do research where he doesn't quite know what he'll find. "This is a great example of non-hypothesis-based biology," Roy explained. "The hypothesis was, 'This system is interesting. I bet if you looked into it some more, there'd be other interesting things.'"

It won't be the last time Roy's lab goes out on a limb. He and his collaborators plan to look into the genomes of other species related to the vole to chart the evolutionary path that led to this strange system. He'll also continue DNA sequencing curiosities across the tree of life.

"These bizarre systems give us a handhold to start to understand why the more common systems are the way they are and why our biology works as it does," he explained. By delving into the weirdest that nature has to offer, maybe we can come to understand ourselves better, too.

Reference: "Sex chromosome transformation and the origin of a male-specific X chromosome in the creeping vole" by Matthew B. Couger, Scott W. Roy, Noelle Anderson, Landen Gozashti, Stacy Pirro, Lindsay S. Millward, Michelle Kim, Duncan Kilburn, Kelvin J. Liu, Todd M. Wilson, Clinton W. Epps, Laurie Dizney, Luis A. Ruedas and Polly Campbell, 7 May 2021, Science. DOI: 10.1126/science.abg7019


DNA Markers Uncovered in Grape Genetics Research Reveal What Makes the Perfect Flower – SciTechDaily

Flower sex is an important factor when breeding for quality cultivars.

Wines and table grapes exist thanks to a genetic exchange so rare that it's only happened twice in nature in the last 6 million years. And since the domestication of the grapevine 8,000 years ago, breeding has continued to be a gamble.

When today's growers cultivate new varieties, trying to produce better-tasting and more disease-resistant grapes, it takes two to four years for breeders to learn whether they have the genetic ingredients for the perfect flower.

Females set fruit, but produce sterile pollen. Males have stamens for pollen, but lack fruit. The perfect flower, however, carries both sex genes and can self-pollinate. These hermaphroditic varieties generally yield bigger and better-tasting berry clusters, and they're the ones researchers use for additional cross-breeding.

Now, Cornell scientists have worked with the University of California, Davis, to identify the DNA markers that determine grape flower sex. In the process, they also pinpointed the genetic origins of the perfect flower. Their paper, "Multiple Independent Recombinations Led to Hermaphroditism in Grapevine," was published on April 13, 2021, in the Proceedings of the National Academy of Sciences.

"This is the first genomic evidence that grapevine flower sex has multiple independent origins," said Jason Londo, corresponding author on the paper and a research geneticist in the USDA-Agricultural Research Service (USDA-ARS) Grape Genetics Unit, located at Cornell AgriTech. Londo is also an adjunct associate professor of horticulture in the School of Integrative Plant Science (SIPS), part of the College of Agriculture and Life Sciences.

"This study is important to breeding and production because we designed genetic markers to tell you what exact flower sex signature every vine has," Londo said, "so breeders can choose to keep only the combinations they want for the future."

Today, most cultivated grapevines are hermaphroditic, whereas all wild members of the Vitis genus have only male or female flowers. As breeders try to incorporate disease-resistance genes from wild species into new breeding lines, the ability to screen seedlings for flower sex has become increasingly important. And since grape sex can't be determined from seeds alone, breeders spend a lot of time and resources raising vines, only to discard them several years down the line upon learning they're single-sex varieties.

In the study, the team examined the DNA sequences of hundreds of wild and domesticated grapevine genomes to identify the unique sex-determining regions for male, female and hermaphroditic species. They traced the existing hermaphroditic DNA back to two separate recombination events, occurring somewhere between 6 million and 8,000 years ago.

Londo theorizes that ancient viticulturists stumbled upon these high-yielding vines and collected seeds or cuttings for their own needs, freezing the hermaphroditic flower trait into the domesticated grapevines used today.

Many wine grapes can be traced back to either the first or second event gene pool. Cultivars such as cabernet franc, cabernet sauvignon, merlot, and Thompson seedless are all from the first gene pool. The pinot family, sauvignon blanc, and gamay noir originate from the second gene pool.

What makes chardonnay and riesling unique is that they carry genes from both events. Londo said this indicates that ancient viticulturists crossed grapes between the two gene pools, which created some of today's most important cultivars.

Documenting the genetic markers for identifying male, female and perfect flower types will ultimately help speed cultivar development and reduce the costs of breeding programs.

"The more grape DNA markers are identified, the more breeders can advance the wine and grape industry," said Bruce Reisch, co-author and professor in both the Horticulture and the Plant Breeding and Genetics sections of SIPS. "Modern genetic sequencing technologies and multi-institutional research collaborations are key to making better grapes available to growers."

Reference: "Multiple independent recombinations led to hermaphroditism in grapevine" by Cheng Zou, Mélanie Massonnet, Andrea Minio, Sagar Patel, Victor Llaca, Avinash Karn, Fred Gouker, Lance Cadle-Davidson, Bruce Reisch, Anne Fennell, Dario Cantu, Qi Sun and Jason P. Londo, 9 April 2021, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2023548118

Funding for this study was provided by a Specialty Crop Research Initiative Competitive Grant from the USDA National Institute of Food and Agriculture.

Co-authors on the paper also include Cheng Zou and Qi Sun at the Cornell Institute of Biotechnology; Mélanie Massonnet, Andrea Minio and Dario Cantu at UC Davis; Lance Cadle-Davidson at the USDA-ARS Grape Genetics Unit; Victor Llaca at Corteva Agriscience; Avinash Karn and Fred Gouker in the Horticulture Section of SIPS; and Sagar Patel and Anne Fennell of South Dakota State University.

Erin Rodger is the senior manager of marketing and communications for Cornell AgriTech.


New peanut has a wild past and domesticated present – Johnson City Press (subscription)

ATHENS - The wild relatives of modern peanut plants have the ability to withstand disease in ways that modern peanut plants can't. The genetic diversity of these wild relatives means that they can shrug off the diseases that kill farmers' peanut crops, but they also produce tiny nuts that are difficult to harvest because they burrow deep into the soil.

Consider it a genetic trade-off: during its evolution, the modern peanut lost its genetic diversity and much of its ability to fight off fungi and viruses, but gained qualities that make peanuts so affordable, sustainable and tasty that people all over the world grow and eat them.

Modern peanut plants were created 5,000 to 10,000 years ago, when two diploid ancestors (plants with two sets of chromosomes) came together by chance and became tetraploids (plants with four sets of chromosomes). While domesticated peanuts traveled around the world and show up in cuisine from Asia to Africa to the Americas, their wild relatives stayed close to home in South America.

Over the past several years, researchers at the University of Georgia, particularly at the Wild Peanut Lab in Athens, have been homing in on the genetics of those wild relatives and detailing where those resiliency traits lie in their genomes. The goal has always been to understand wild peanut varieties well enough to make use of the advantageous ancient genes (the ones the wild relatives have but modern peanuts lost) while holding onto the modern traits that farmers need and consumers want.

"Most of the wild species still grow in South America," said Soraya Leal-Bertioli, who runs the Wild Peanut Lab with her husband, David Bertioli. "They are present in many places, but you don't just come across them on the streets. One has to have the collector's eye to spot them in the undergrowth."

Those wild plants can't breed with cultivated peanuts in nature any longer because they have only two sets of chromosomes.

"The wilds are ugly distant relatives that peanut does not want to mix with," Leal-Bertioli said, "but we do the matchmaking."

Researchers in Athens and Tifton have successfully crossed some of those wild species together to create tetraploid lines that can be bred with peanuts. Those new lines will give plant breeders genetic resources that will lead to a bumper crop of new varieties with disease resistance and increased sustainability. The newly released lines wont produce the peanuts that go into your PB&J tomorrow, but they are the parents of the plants that farmers will grow in coming years.

The Journal of Plant Registrations published the details about the first of these germplasm lines this month. The lines were created by a team led by the Bertiolis, who conduct peanut research through the College of Agricultural and Environmental Sciences' Institute for Plant Breeding, Genetics and Genomics. They also manage separate global research projects for the Feed the Future Innovation Lab for Peanut, a U.S. Agency for International Development project to increase the global food supply by improving peanuts.

The new lines developed by the Bertiolis are resistant to early and late leaf spot, diseases that cost Georgia peanut producers $20 million a year, and root-knot nematode, a problem that few approved chemicals can fight. Called GA-BatSten1 and GA-MagSten1, they are induced allotetraploids, meaning they are made through a complex hybridization that converts the wild diploid species into tetraploids.

The second set of new varieties comes from work done in Tifton and led by Ye Juliet Chu, a researcher in Peggy Ozias-Akins' lab within the CAES Department of Horticulture. These three varieties are made from five peanut relatives and show resistance to leaf spot. One is also resistant to tomato spotted wilt virus, a disease that almost wiped out peanut cultivation in the U.S. in the 1990s.

Creating the first fertile allotetraploids is a challenge, but then scientists can cross them with peanuts and, through generations, select for the right traits. Plant breeders will be able to take these lines made from peanuts' wild relatives and cross them with modern domesticated peanuts to get the best of both: a plant that looks like a peanut and produces nuts with the size and taste of modern varieties, but that has the disease-fighting ability of the wild species.

In Tifton, for example, the team has crossed the wild species with cultivated peanuts to get a line that's 25% wild and 75% cultivated. Randomly breeding the two together will create some plants with small seeds, weak pegs, a sprawling growth pattern and low yield, but by using genetic mapping, breeders can find the plants that carry disease-fighting genes and also have attractive market traits.

"We plan to perform genetic mapping with these materials and define the beneficial wild genomic regions for molecular breeding," Chu said. "We still need to define the genomic regions in the synthetic allotetraploids conferring desirable traits and specifically integrate those regions into cultivated peanuts."

While plant breeders have known the value of the diversity in wild peanut species for decades, they couldn't keep track of those valuable wild genes until recently. The peanut industry in Georgia and other states has invested in work to sequence peanuts and the two ancestor species, knowing that the work to understand the peanut genome would pay off. With genetic markers developed using the genome, breeders not only can tell that a plant has a desirable trait, they know what genome regions are responsible for that trait and can combine DNA profiling with traditional field selection to speed the complex process of developing a new variety.

"It streamlines everything," David Bertioli said. "You can make a cross, which produces 1,000 seeds, but before planting them, their DNA can be profiled. That way you can see that only 20 of those plants are ideal for further breeding. Forty years ago, you'd have to plant them all, making the process much more cumbersome."

Marker-assisted selection for peanut breeding has been implemented in Ozias-Akins' lab in Tifton for the past decade. Applying genetic markers associated with resistance from wild peanuts on this selection platform will accelerate the delivery of peanut varieties pyramided with superior agronomic performance and strong disease resistance.
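The DNA-profiling step Bertioli describes can be sketched as a simple filter over marker genotypes. This is a hypothetical illustration, not the lab's actual pipeline: the marker names and the "R"/"S" (resistant/susceptible) allele calls below are invented.

```python
# Hypothetical marker-assisted selection filter. Marker names and
# allele calls are invented for illustration.
TARGET_MARKERS = {"leaf_spot_resistance": "R", "nematode_resistance": "R"}

def keep_for_breeding(seedlings):
    """Return seedlings whose DNA profile carries the favourable
    allele at every target marker."""
    return [
        s for s in seedlings
        if all(s["profile"].get(marker) == allele
               for marker, allele in TARGET_MARKERS.items())
    ]

# Three seedlings genotyped before planting; only seedling 1 carries
# both resistance alleles and would be kept for further breeding.
seedlings = [
    {"id": 1, "profile": {"leaf_spot_resistance": "R", "nematode_resistance": "R"}},
    {"id": 2, "profile": {"leaf_spot_resistance": "R", "nematode_resistance": "S"}},
    {"id": 3, "profile": {"leaf_spot_resistance": "S", "nematode_resistance": "R"}},
]
print([s["id"] for s in keep_for_breeding(seedlings)])  # → [1]
```

In a real programme the profiles would come from genotyping assays and the targets from published marker-trait associations, but the selection logic is essentially this filter applied to thousands of seedlings.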

With ongoing work, the Journal of Plant Registrations will document the release of other peanut germplasm with resistance to important diseases. Releasing the lines, along with the molecular markers for their advantageous traits, provides the peanut-breeding community with genetic resources to produce more resilient crops.

"In the past, we knew where we were going, but it was like everyone drew their own map," David Bertioli said. "Now, it's like we have GPS. (Scientists) can tell each other, 'Here are my coordinates. What are yours?' And all the data is published."

Breeders can access the seeds of the wild species crosses through the USDA-ARS National Plant Germplasm System in Fort Collins, Colorado, or in the U.S. Department of Agriculture's Plant Genetic Resources and Conservation Unit in Griffin.

For more information about peanut research being performed at UGA, visit peanuts.caes.uga.edu.


Variety is the spice of life… and key to saving wildlife – Pursuit

In the critical battle against extinction, conservationists use a variety of tactics to try to save species.

One of the most fundamental tools is maintaining the amount of variation of genetic material (DNA) in a group of animals - this is described as their genetic diversity. In general, the greater the genetic diversity, the higher chance of long-term survival.

This technique works because a wider range of genes and gene variants is more likely to enable a species to adapt to unexpected conditions, including new diseases and warmer climates.

It's just like having a small pack of playing cards: if we don't have many cards to choose from, our options are limited.

The Tasmanian Devil has faced this issue, persisting in populations that have historically been small, with dwindling genetic diversity.

This limited genetic variation has meant the Devil's immune system has reduced genetic options with which to adapt and fight off the contagious cancer known as Devil Facial Tumour Disease.


The field of conservation genetics deals mainly with strategies to conserve or enhance genetic diversity within species' populations to promote their capacity to adapt, reduce the negative effects of inbreeding and random genetic drift and, ultimately, decrease their extinction risk.

So, from a conservation genetic perspective, a high level of genetic diversity within a species population (compared with other populations from the same species) generally reflects a healthy viable population.

A recent perspective has challenged this school of thought, arguing that the amount of genetic variation present in a population is not an important consideration for their conservation.

Because this view is not supported by the literature and ignores well-established evolutionary principles, it's concerning that it may affect how conservation genetic strategies are applied in future.

Genetic variation is measured by heterozygosity: the presence of different versions of the same gene (known as alleles) at many locations across the genomes of individuals.

The process of inbreeding (mating between related individuals) leads to increased sameness, or homozygosity, in which both copies of a gene are the same allele.


Deleterious alleles (versions that negatively affect health) reduce the likelihood of reproducing for an individual, and are typically expressed when homozygous.

This decreases the individual's, and ultimately the population's, chance of survival, known as fitness, as has been seen in the Helmeted Honeyeater.

For an accurate estimation of the health of a population from a genetic viewpoint, both the diversity and sameness of genetic makeup need to be considered.
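To make those two quantities concrete, here is a minimal Python sketch (the function names and toy genotype data are invented for illustration) that estimates observed heterozygosity, the diversity actually seen in individuals, and expected heterozygosity, the benchmark under random mating, from SNP-style genotypes:

```python
# Illustrative estimate of genetic diversity from SNP genotypes.
# Each locus holds one genotype per individual, written as an allele
# pair such as ("A", "G"). Data here are toy values for illustration.
from collections import Counter

def observed_heterozygosity(genotypes_by_locus):
    """Fraction of individuals carrying two different alleles,
    averaged across loci."""
    per_locus = []
    for genotypes in genotypes_by_locus:
        het = sum(1 for a, b in genotypes if a != b)
        per_locus.append(het / len(genotypes))
    return sum(per_locus) / len(per_locus)

def expected_heterozygosity(genotypes_by_locus):
    """Heterozygosity expected under random mating (1 - sum of squared
    allele frequencies), averaged across loci."""
    per_locus = []
    for genotypes in genotypes_by_locus:
        alleles = Counter(a for pair in genotypes for a in pair)
        n = sum(alleles.values())
        per_locus.append(1 - sum((c / n) ** 2 for c in alleles.values()))
    return sum(per_locus) / len(per_locus)

# Two loci scored in four individuals
data = [
    [("A", "G"), ("A", "A"), ("A", "G"), ("G", "G")],  # locus 1
    [("C", "C"), ("C", "T"), ("C", "C"), ("C", "C")],  # locus 2
]
print(f"Ho={observed_heterozygosity(data):.3f}  "
      f"He={expected_heterozygosity(data):.3f}")
```

An observed value falling well below the expected one is the classic genetic warning sign of inbreeding in a population.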

In making their arguments, the authors of the recent perspective separate the effects of genetic variation that influences traits important for survival (so-called adaptive or functional variation) from genetic variation that does not (neutral variation).

Variation in genes that directly affect disease susceptibility or drought tolerance would be regarded as adaptive, whereas variation in genes that do not affect these traits or any other traits would be regarded as neutral.

However, it's not usually possible to distinguish these types of variation, so conservation genetics typically assesses variation without reference to whether it is neutral or adaptive.

In a recent publication, we discuss the difficulty associated with identifying adaptive diversity, particularly in relation to how genomic information can be used to predict the future vulnerability of species under climate change.

Within this paper, we highlight that while genomics is providing valuable insights into processes like inbreeding, there's a need to further develop approaches based on functional genes before we can use genomics to predict how species may genetically evolve to deal with future climate change.


Increasingly, variation in conservation genetics is now being characterised based on DNA sequence variation that's scattered throughout the thousands of nucleotides that make up an organism's genome: so-called single nucleotide polymorphism (SNP) markers.

This variation is regarded as a reasonable approximation of adaptive potential, particularly as adaptation involving traits like growth ability, stress tolerance and even disease tolerance are also scattered throughout the genome.

Any sign of variation in the genome is taken as a signal of variation more generally.

Research shows that when flies, birds and other organisms adapt to new environments, hundreds or even thousands of genes throughout the genome can be part of the adaptation process and that these are often inconsistent between evolutionary events, making it hard to identify specific genes involved in adaptation.

There is plenty of evidence that overall levels of genetic variation, regardless of whether it's adaptive or neutral, affect the rate of adaptation of populations, and that high levels reduce the probability of populations becoming extinct.

The best studies come from careful laboratory experiments where a large number of populations derived from the same source, but differing in genetic variation, are compared.

Results from these experiments show that larger and genetically diverse populations have much lower extinction rates in experimental systems like flies and crustaceans.


Under field conditions, there's also a lot of evidence that an injection of new genetic variants boosts the fitness of threatened species. This includes the genetic rescue strategy used successfully to prevent the extinction of the Mount Buller Mountain Pygmy-possum in Victoria.

The initial recovery of genetic health in these populations is partly associated with decreasing inbreeding but, in the longer term, the increase in genetic variation will be important for adaptation.

But these genetic strategies cannot succeed without also addressing threats like habitat destruction and invasive predators, both of which were tackled hand-in-hand with the genetic rescue of the Mountain Pygmy-possum.

There is no doubt that many species lacking genetic variation can still be highly successful. This includes many weedy species of animals and plants that do not have much genetic variation including pest species that reproduce clonally like many aphids.

There are also some highly invasive species in Australia that have limited diversity, including foxes, carp and deer. But these species can often reproduce quickly, are freed from natural predators and competitors, and are often generalists, resulting in the quick expansion of populations and ranges in non-native environments.

Conservation genetics does not focus on such comparisons between species; instead, it tends to focus on native species of conservation concern, where the relative fitness of populations is linked directly to their relative levels of genetic variation.


As we head into an uncertain world, it is important to ensure that threatened species have the best chance of surviving changes to their environment.

There are different ways of boosting genetic variation in populations that have become genetically vulnerable, including the deliberate introduction of individuals from other populations and the re-establishment of habitat corridors.

All of these efforts should coincide with restoration programs that help conservation-dependent species maintain large population sizes, which will in turn enable the maintenance of high levels of genetic diversity, increasing resilience and adaptive capacity.

These are key principles that must be followed, given that adaptive changes to environmental stressors remain unpredictable at the genetic level, are complex (involving many genes) and are likely to depend on biological as well as environmental factors.

When it comes to conservation efforts in the immediate future, assessing genetic variation across a species' entire genome must be pivotal in our decision making.

Banner: Mountain Pygmy Possum/ Andrew Weeks


Electrocardiogram 1: purpose, physiology and practicalities – Nursing Times

An electrocardiogram monitors the heart's electrical activity and is used in many clinical settings. This article explores how the technique works and how it is undertaken

An electrocardiogram assesses the heart's electrical activity; it is commonly used as a non-invasive monitoring device in many different healthcare settings. This article, the first in a three-part series, discusses cardiac electrophysiology, indications for an electrocardiogram, monitoring and troubleshooting.

Citation: Jarvis S (2021) Electrocardiogram 1: purpose, physiology and practicalities. Nursing Times [online]; 117: 6, 22-26.

Author: Selina Jarvis is a research nurse at Guy's and St Thomas' NHS Foundation Trust.

An electrocardiogram (ECG) is a quick bedside investigation that assesses the electrical activity of the heart. It is a non-invasive, cheap technique that provides critical information about heart rate and rhythm, and helps assess for cardiac disease. ECG monitoring is used often in many different healthcare settings, including acute care, cardiac care and preoperative assessment.

This article, the first in a three-part series, discusses cardiac electrophysiology, indications for an ECG, monitoring and troubleshooting. Part 2 of the series will take a methodical approach to interpretation, with a focus on cardiac ischaemia; part 3 will explore cardiac rhythm and conduction abnormalities.

The heart is an organ that acts as a mechanical pump; it consists of four chambers (right and left atria, and right and left ventricles) that contract sequentially during the cardiac cycle and are regulated by an electrical conducting system. To understand the basics of an ECG, it is important to consider the normal electrophysiology of the heart, in which a cardiac electrical impulse is generated and transmitted to the heart muscle, leading to contractions (the heartbeat).

There are two main cell types in the heart: contractile cells (cardiomyocytes) and the specialised cells of the electrical conducting system.

Cardiomyocytes contract and relax in response to an electrical stimulus. In their resting state, the inside of the cell has high levels of potassium ions (K+) compared with the outside, along with negatively charged proteins, which creates a chemical gradient. Outside the cardiomyocytes there are more sodium ions (Na+) and calcium ions (Ca2+) than inside the cell. Overall, this means there is a voltage difference across the cell membrane, called the transmembrane potential (TMP). When there is net movement of Na+ and Ca2+ into the cell, the TMP becomes more positive; when there is net movement of positive ions out of it, the TMP becomes more negative.

In response to an electrical stimulus, cardiomyocytes become depolarised: fast Na+ channels open on the cell membrane, allowing Na+ into the cell; because Na+ is positively charged, the TMP becomes more positive, increasing from the resting potential of -90 millivolts (mV) to -70 mV. This is the point at which enough fast Na+ channels have opened to generate an inward Na+ current, and is known as the threshold potential. When the charge becomes greater than -40 mV, L-type calcium channels open and allow an inward flux of Ca2+. This results in excitation-contraction coupling, which leads to the contraction of the heart muscle. Following this, repolarisation occurs; the cardiac membrane potential returns to the resting state and no muscle contraction occurs.
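The sequence of threshold crossings described above can be illustrated with a short sketch (a toy, not a physiological model; the only numbers used are the membrane potentials given in the text, and the voltage trace is invented):

```python
# Toy illustration of the depolarisation sequence described above.
RESTING_MV = -90     # resting transmembrane potential
THRESHOLD_MV = -70   # enough fast Na+ channels open for an inward current
CA_OPEN_MV = -40     # L-type calcium channels open above this charge

def depolarisation_events(tmp_trace_mv):
    """Report the first sample at which each named threshold is crossed
    in a rising sequence of transmembrane potentials (mV)."""
    events = {}
    for i, v in enumerate(tmp_trace_mv):
        if v >= THRESHOLD_MV and "threshold_potential" not in events:
            events["threshold_potential"] = i
        if v > CA_OPEN_MV and "ca_channels_open" not in events:
            events["ca_channels_open"] = i
    return events

trace = [RESTING_MV, -85, -75, -70, -55, -35, 10]  # one sample per step
print(depolarisation_events(trace))
# → {'threshold_potential': 3, 'ca_channels_open': 5}
```

The point of the sketch is simply the ordering: the threshold potential is reached first, and only once the charge rises above -40 mV do the calcium channels open to trigger contraction.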

The heart's electrical conducting system (Fig 1) regulates its overall electrical activity and includes the following components: the sinoatrial node (SAN), the atrioventricular node (AVN), the bundle of His, the right and left bundle branches, and the Purkinje fibres.

Each heartbeat is initiated by an electrical impulse generated by the SAN; this impulse passes through the atria to the AVN, then through the bundle of His, the subsequent bundle branches and the Purkinje fibres into the right and left ventricles. As a result, the atria and ventricles contract sequentially as the impulse is conducted through the different regions of the heart. In normal circumstances, the SAN is the heart's pacemaker; however, if there is a problem with the SAN, another conducting centre, such as the AVN, bundle of His or bundle branches, can assume the role of pacemaker in an occurrence known as an escape rhythm (Jarvis and Saman, 2018; Newby and Grubb, 2018).

In healthy individuals, the chambers of the heart contract and relax in a coordinated manner, referred to as systole and diastole respectively. The right and left atria synchronise during atrial systole and diastole, while the right and left ventricles synchronise during ventricular systole and diastole. One complete cycle of these events is called the cardiac cycle, during which the pressure in the cardiac chambers rises and falls, causing the opening and closure of heart valves that regulates blood flow between the chambers.

Pressures on the left side of the heart are around five times higher than those on the right side, but the same volume of blood is pumped per cardiac beat. In the cardiac cycle, blood moves from high- to low-pressure areas (Marieb and Keller, 2018).

The ECG's origin dates back to the discovery of the heart muscle's electrical activity. In 1901, Willem Einthoven made a breakthrough that facilitated the first steps towards electrocardiography, for which he subsequently won a Nobel Prize in 1924 (Yang et al, 2015).

ECGs are used to diagnose cardiac disease and to detect abnormal heart rhythms. They may also be used as a general health assessment in certain occupations, including aviation, diving and the military (Chamley et al, 2019). According to professional societies, adequate education for medical staff is critical for ECG monitoring and for developing skills in interpreting waveforms and ECG data (Sandau et al, 2017).

In routine clinical practice, there are four main approaches to monitoring cardiac rhythm:

The 12-lead ECG is a non-invasive method of monitoring the heart's electrical activity. This bedside test can provide important diagnostic information or be used as part of a baseline assessment; Box 1 outlines some indications for using it.

If there is a concern that a patients acute symptoms may have a cardiac cause, continuous cardiac monitoring might be used in a hospital setting. This may help with:

Continuous cardiac monitoring is also an important component of non-invasive monitoring of vital signs, with clinical benefits in medical ward settings (Sun et al, 2020).

The ECG is a graphical representation of the heart's electrical activity, plotting its voltage on a vertical axis against time on a horizontal axis. It is recorded onto ECG paper, which runs at a speed of 25 mm per second. Standard pink ECG paper is made up of 5 x 5 mm squares, each containing 25 smaller 1 x 1 mm squares. The 1 mm width of each small square represents 40 milliseconds. On the vertical axis, the height of an ECG wave or deflection represents its amplitude (Prutkin, 2020). Fig 2 shows what a normal ECG looks like and its relationship with the stages of the cardiac cycle.
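These dimensions are what make the familiar bedside arithmetic work: at 25 mm per second, each large (5 mm) square represents 200 milliseconds, so a regular heart rate can be estimated by dividing 300 by the number of large squares between successive R-waves. A small sketch of that calculation:

```python
# Heart-rate arithmetic from standard ECG paper (25 mm per second).
MS_PER_SMALL_SQUARE = 40    # 1 mm at 25 mm/s
MS_PER_LARGE_SQUARE = 200   # 5 mm at 25 mm/s

def heart_rate_bpm(rr_large_squares):
    """Heart rate (beats/min) from a regular R-R interval measured in
    large squares; equivalent to the bedside '300 rule'."""
    rr_ms = rr_large_squares * MS_PER_LARGE_SQUARE
    return 60_000 / rr_ms  # milliseconds per minute / R-R interval

print(heart_rate_bpm(4))  # R-R of 4 large squares → 75.0 bpm
print(heart_rate_bpm(3))  # R-R of 3 large squares → 100.0 bpm
```

The same relationship works in reverse: an R-R interval longer than five large squares (a rate below 60 bpm) suggests bradycardia, and one shorter than three large squares (above 100 bpm) suggests tachycardia.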

During the normal cardiac cycle, atrial contraction is associated with the P-wave (atrial depolarisation), which is of low amplitude because the atrial muscle is relatively thin. This contrasts with the QRS complex, which represents the electrical impulse as it spreads through the ventricles (ventricular depolarisation). The first deflection of the QRS complex is the Q-wave, a negative wave representing septal depolarisation. The R-wave represents depolarisation of the left ventricular myocardium, and the next negative deflection is the S-wave, which represents terminal depolarisation. The T-wave follows and represents repolarisation of the ventricles.

The ECG also records a number of other parameters:

It is important to know the normal ranges for the various ECG parameters (Table 1): if any measurements are outside the normal range, thought and investigation are needed to ascertain why and to decide on a course of action. Parts 2 and 3 of this series will discuss this in more detail.

It is important to remember that an electrical lead actually represents the difference in electrical potential measured between two points in space. The conduction of electrical impulses between these two points can be detected via electrodes positioned at various points on the body; this is then displayed as a waveform on the ECG machine/monitor.

There are several configurations of electrode positioning; continuous ECG monitoring uses a 3-lead configuration but the standard 12-lead ECG comprises:

To position the chest electrodes accurately, it is important to first identify the sternal angle (angle of Louis) by feeling for the bony prominence at the top of the sternum, which articulates with the second rib, above the second intercostal space. By moving the fingers downwards, the fourth intercostal space can be felt: here, the electrodes for V1 and V2 should be placed to the right and left of the sternum respectively. By feeling the fifth intercostal space and moving the fingers to the middle of the clavicle, V4 can be placed on the midclavicular line. V3 should then be placed midway between V2 and V4. V5 is placed in the fifth intercostal space on the anterior axillary line, and V6 is placed in the fifth intercostal space in the midaxillary line.
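The landmark description above can be condensed into a simple lookup table. This is a study aid distilled from the text (the phrasing is mine), not a substitute for palpating the landmarks:

```python
# Chest (precordial) electrode positions, as described in the text.
CHEST_LEADS = {
    "V1": "4th intercostal space, right sternal edge",
    "V2": "4th intercostal space, left sternal edge",
    "V3": "midway between V2 and V4",
    "V4": "5th intercostal space, midclavicular line",
    "V5": "5th intercostal space, anterior axillary line",
    "V6": "5th intercostal space, midaxillary line",
}

for lead, position in CHEST_LEADS.items():
    print(f"{lead}: {position}")
```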

To record the limb leads (Fig 3b), four electrodes are placed on the body. In the upper limbs, one electrode pad is placed below the right clavicle (RA) and another below the left clavicle (LA); in the lower limbs, cables are connected to electrode pads placed on the left hip/ankle (LL) and the right hip/ankle (RL).

It is important to follow local policy. All of the limb electrodes are placed on bony areas, rather than muscle, to avoid motion artifact caused by muscle oscillation. Positioning electrodes in this formation allows the heart to be electrically mapped in three dimensions.

When undertaking any cardiac monitoring, the first step is to give the patient a simple explanation of the purpose of the test and what they should expect, as well as gaining their informed consent. It is important to ensure they are not allergic to the gel used on the ECG electrodes by asking if they have had any previous reactions.

It is critical that the health professional can accurately place the electrodes, as this helps avoid inaccurate diagnosis and treatment. It is also important to have good contact between the electrode and the skin, which should be clean and dry. Excessive hair may need to be shaved and oily skin cleaned with alcohol or gauze. The electrodes are then attached to the patient in line with the machine's instructions. The ECG is displayed on the machine's monitor and should be checked for clarity, wave size and any interference.

Inadequate ECG monitoring can be dangerous; for example, misreading artifacts (electrocardiographic impulses unrelated to cardiac electrical activity) during ECG monitoring can be costly and cause delays to care. Other potential problems and how to resolve them are listed in Table 2.

An excellent ECG trace must be acquired to aid appropriate interpretation and provide the best care. The Society of Cardiological Science and Technology's (2020) ECG guidance has more information about the reporting standards used by professional societies.

ECG monitoring is standard for patients in a variety of settings. Understanding the basic physiology underpinning the electrical and mechanical events of the heart is crucial for ECG interpretation. Part 2 of this series will focus on this and present important ischaemic pathologies, while part 3 will cover cardiac rhythm disorders and conduction defects.

Selina Jarvis was a recipient of the Mary Seacole Development Award and is focused on improving care for patients with cardiac disease.

References

Chamley RR et al (2019) ECG interpretation. European Heart Journal; 40: 32, 2663-2666.

Jarvis S, Saman S (2018) Cardiac system 1: anatomy and physiology. Nursing Times [online]; 114: 2, 34-37.

Marieb EN, Keller S (2018) Essentials of Human Anatomy and Physiology. Pearson.

Newby DE, Grubb NR (2018) Cardiovascular disease. In: Ralston SH et al (eds) Davidson's Principles and Practice of Medicine. Elsevier.

Prutkin JM (2020) ECG Tutorial: Electrical Components of the ECG. uptodate.com

Sandau KE et al (2017) Update to practice standards for electrocardiographic monitoring in hospital settings: a scientific statement from the American Heart Association. Circulation; 136: 19, e273-e344.

Society of Cardiological Science and Technology (2020) Clinical Guidelines by Consensus: ECG Reporting Standards and Guidance. SCST.

Sun L et al (2020) Clinical impact of multi-parameter continuous non-invasive monitoring in hospital wards: a systematic review and meta-analysis. Journal of the Royal Society of Medicine; 113: 6, 217-224.

Yang XL et al (2015) The history, hotspots, and trends of electrocardiogram. Journal of Geriatric Cardiology; 12: 4, 448-456.

Electrocardiogram 1: purpose, physiology and practicalities - Nursing Times

Women and endurance running part one: how to train with your cycle – Canadian Running Magazine

Dr. Stacy Sims is a researcher, entrepreneur, recreational athlete and scientist whose area of expertise is exercise physiology and sports nutrition. Early in her career, she became frustrated by the fact that the vast majority of sports science treated women like small men: most studies were conducted on men, and all the training, recovery and nutrition principles we learned from those studies were applied to women, despite the fact that female physiology is different from men's. We sat down with her to talk about these differences to determine how women runners can work with their bodies to become stronger, faster and healthier athletes.

RELATED: Why sports medicine research needs more women

Today, in Part One of this series, we will be diving into the female menstrual cycle and how it affects training. Part Two will cover nutrition strategies to boost performance throughout your cycle and how contraceptives affect training, and Part Three will look at puberty, perimenopause and menopause.

One of the most obvious differences between women and men, of course, is the female menstrual cycle. For years, a woman's period was seen as being detrimental to her performance, but Sims says this is entirely false.

"If you don't have a period, it's detrimental [to your performance]," she explains. "Having a period means you're healthy, you're adapting and you're resilient to stress."

Sims explains that the reason having a period developed such a negative connotation in sports is because of the way sport developed. In the beginning, she says, it acted as a male demonstration of aggression, with an emphasis on traditionally male qualities like speed, strength and power. There has always been a taboo around the female menstrual cycle, so when it was brought into the sporting context, it came across as a weakness. Because of this, the idea that not having a period meant you were just as strong or trained just as hard as the men became endemic in sport.

This couldn't be farther from the truth. Having a period means you're getting enough nutrients to support your health and your training, your body is responding well to training adaptations and stress, your sleep patterns are good, your endocrine system is healthy and you're in energy balance. No longer getting a regular period is the first red flag that something is amiss, and it sets you up for health complications down the road, such as loss of bone density, irregular sleep patterns and hormone dysfunction, among others.

Sims says that, for so long, women have been told that when they're on their periods they should feel flat, tired and awful, and that they should be hiding. Instead, she argues, we should be telling women the opposite: that their periods are an opportunity to increase the intensity of their training sessions.

"The more we get women to move during their periods, the better it is and the less symptomatology they have," she explains.

From a physiology standpoint, this also makes a lot of sense. The week that you're on your period (days one through seven of your cycle) is when your hormones are at their lowest point, and this makes your body more resilient to stress. This, then, is the time to do more high-intensity sessions, because you recover much better. The only caveat to this, says Sims, is for women who experience heavy bleeding during the first couple of days of their period. In this case, you want to keep moving but shift the focus from high intensity to technical work like drills, or simply moving for moving's sake. You can hit your training hard again once the heavy bleeding subsides.

"The myths and perceptions around bleeding need to be extracted from the training conversation," says Sims.

The days around ovulation, which usually falls around day 14 of the cycle for most women, are another good time to schedule a hard training session. After that, as your levels of estrogen and progesterone begin to rise again, Sims suggests focusing more on steady-state runs. Finally, the five days before your period starts, which is when your body is most affected by hormones, should be treated more like a de-load or off-week. This is the time to back off the intensity and focus on other aspects of training like running drills, de-loading in the gym and working on technique. Every woman will be slightly different, so it's important to track your cycle, take note of the days you feel better and the days you feel worse, and adjust your training plan accordingly. While there are many ways to do this, Sims is a big fan of the app Wild AI.
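As a rough summary, the phase-by-phase guidance above could be encoded as follows. The day boundaries assume a textbook 28-day cycle, and the function is my own simplification of Sims' advice, not something she publishes:

```python
def training_focus(cycle_day: int, heavy_bleeding: bool = False) -> str:
    """Map a day of a 28-day cycle to the training emphasis described above."""
    if not 1 <= cycle_day <= 28:
        raise ValueError("expected a day in the range 1-28")
    if cycle_day <= 7:           # period: hormones at their lowest point
        return "technique/drills" if heavy_bleeding else "high intensity"
    if 13 <= cycle_day <= 15:    # around ovulation (~day 14)
        return "high intensity"
    if cycle_day >= 24:          # ~5 days before the next period
        return "de-load: drills, gym de-load, technique"
    return "steady-state runs"   # estrogen and progesterone rising

print(training_focus(3))                        # high intensity
print(training_focus(2, heavy_bleeding=True))   # technique/drills
print(training_focus(20))                       # steady-state runs
```

Since every woman is slightly different, the boundaries would need to come from tracked cycle data rather than these fixed defaults.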

If you're coaching a team of female athletes, Sims recommends coming up with a system that allows you to keep track of where each of your athletes is at, so that you can adjust their training accordingly (or, at the very least, adjust your expectations of individual athletes depending on the day).

"This is where training is different than performance," says Sims. "Train according to your menstrual cycle, but we know that the psychological aspect of performance supersedes the physiological."

If, for example, your race ends up falling on a day during your cycle when you typically don't feel your best, it's easy to let that get into your head. Putting certain nutrition interventions in place and boosting yourself up mentally will help you overcome whatever physiological downfall you might be experiencing.

"We have to separate out performance versus training, which hasn't been done well yet," explains Sims. "When we talk about performance, there's never a negative point in the menstrual cycle. When we talk about training, there are ups and downs. We can get better training adaptations when our bodies are more resilient to stress, and then start to taper down to support that hard training. But for performance, just go, just hit it hard."

The key takeaway from this is that a woman's period should not be seen as a detriment to performance, but rather as a tool to make her a better athlete. If women can learn to work with their physiology rather than against it, they will be healthier, happier and faster runners.

RELATED: Exercising and your period: changing the conversation


Is riding an electric bike good exercise, or just convenient transportation? – The Irish Times

Does riding an electric bike to work count as exercise, and not just a mode of transportation?

It can, if you ride right, according to a pragmatic new study comparing the physiological effects of e-bikes and standard road bicycles during a simulated commute. The study, which involved riders new to e-cycling, found that most could complete their commutes faster and with less effort on e-bikes than standard bicycles, while elevating their breathing and heart rates enough to get a meaningful workout.

But the benefits varied and depended, to some extent, on how people's bikes were adjusted and how they adjusted to the bikes. The findings have particular relevance at the moment, as pandemic restrictions loosen and offices reopen, and many of us consider options other than packed trains to move ourselves from our homes to elsewhere.

Few people bike to work. Asked why, many tell researchers that bike commuting requires too much time, perspiration and accident risk. Simultaneously, though, people report a growing interest in improving their health and reducing their ecological impact by driving less.

In theory, both these hopes and concerns could be met or minimised with e-bikes. An alluring technological compromise between a standard, self-powered bicycle and a scooter, e-bikes look almost like regular bikes but are fitted with battery-powered electric motors that assist pedalling, slightly juicing each stroke.

With most e-bikes, this assistance is small, similar to riding with a placid tailwind, and ceases once you reach a maximum speed of about 30km/h or stop pedalling. The motor will not turn the pedals for you.

Essentially, e-bikes are designed to make riding less taxing, which means commuters should arrive at their destinations more swiftly and with less sweat. They can also provide a psychological boost, helping riders feel capable of tackling hills they might otherwise avoid. But whether they also complete a workout while e-riding has been less clear.

So, for the new study, which was published in March in the Translational Journal of the American College of Sports Medicine, researchers at Miami University in Oxford, Ohio, decided to ask inexperienced cyclists to faux-commute. To do so, they recruited 30 local men and women, aged 19 to 61, and invited them to the physiology lab to check their fitness levels, along with their current attitudes about e-bikes and commuting.

Then, they equipped each volunteer with a standard road bike and an e-bike and asked them to commute on each bike at their preferred pace for approximately 5km. The cyclists pedalled around a flat loop course, once on the road bikes and twice with the e-bike. On one of these rides, their bike was set to a low level of pedal assistance, and on the other, the oomph was upped until the motor sent more than 200 watts of power to the pedals. Throughout, the commuters wore timers, heart rate monitors and facial masks to measure their oxygen consumption.

Afterward, to no one's surprise, the scientists found that the motorised bikes were zippy. On e-bikes, at either assistance level, riders covered the 5km several minutes faster than on the standard bike: about 11 or 12 minutes on an e-bike, on average, compared with about 14 minutes on a regular bike. They also reported that riding the e-bike felt easier. Even so, their heart rates and respiration generally rose enough for those commutes to qualify as moderate exercise, based on standard physiological benchmarks, the scientists decided, and should, over time, contribute to health and fitness.

But the cyclists' results were not all uniform or constructive. A few riders' efforts, especially when they used the higher assistance setting on the e-bikes, were too physiologically mild to count as moderate exercise. Almost everyone also burned about 30 per cent fewer calories while e-biking than while road riding (344 to 422 calories on average on an e-bike, versus 505 calories on a regular bike), which may be a consideration if someone is hoping to use bike commuting to help lose weight.
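As a quick check of those figures (my own arithmetic, not the study's), the two reported e-bike averages correspond to roughly 16 to 32 per cent fewer calories than the road bike, bracketing the article's ~30 per cent summary:

```python
# Percentage reduction in calories burned, from the averages reported above.
def pct_fewer(e_bike_cal: float, road_cal: float) -> float:
    return 100 * (road_cal - e_bike_cal) / road_cal

print(round(pct_fewer(344, 505), 1))  # 31.9
print(round(pct_fewer(422, 505), 1))  # 16.4
```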

And several riders told the researchers they worried about safety and control on the e-bikes, although most, after the two rides, reported greater confidence in their bike-handling skills, and found the e-commutes, compared to the road biking, more fun.

This study, though, was obviously small-scale and short-term, involving only three brief pseudo-commutes. Still, the findings suggest that riding an e-bike, like other forms of active transport, can be as good for the person doing it as for the environment, says Helaine Alessio, the chair of the department of kinesiology at Miami University, who led the new study with her colleague Kyle Timmerman and others.

But to increase your potential health benefits the most, she says, keep the pedal assistance level set as low as is comfortable for you. Also, for the sake of safety, practice riding a new e-bike or any standard bike on a lightly trafficked route until you feel poised and secure with bike handling.

Wear bright, visible clothing, too, and choose your commuting route wisely, Dr Alessio says. Look for bike paths and bike lanes whenever possible, even if you need to go a little bit out of your way. New York Times


Quantifying stress & anxiety: Why corporate wellness programs will play a pivotal role in this paradigm shift – MedCity News

The past decade has seen society advance in leaps and bounds in its awareness and understanding of the scale and impact of mental health problems. In recent years, the focus has switched somewhat from reaction to prevention, in parallel with the healthcare industry as a whole, in a bid to secure the sustainability of care services.

The economic impact of the mental health epidemic is a key driver behind governments' and businesses' move towards more preventive wellbeing initiatives. For instance, the World Health Organization (WHO) estimates that mental health problems in the workplace cost the global economy $1 trillion annually in lost productivity.

Stress and anxiety contribute heavily to this statistic. Stress is defined as the body's reaction to feeling threatened or under pressure. Anxiety, which is often linked to stress, is defined as a feeling of unease, such as worry or fear, which can be mild or severe and is the main symptom of several mental disorders.

In the UK, for example, 57% of all working days lost to ill health were due to stress and anxiety in 2018. Its a similar story in the US, where its estimated that over half of all working days lost annually from absenteeism are stress-related, with the annual cost in 2013 alone equating to over $84 billion.

Stress and anxiety can also have a significant impact on an individuals physical health, affecting their work performance and productivity and causing further absenteeism. This form of poor mental health can impact physical health either directly through autonomic nervous system activity or indirectly as a result of unhealthy behaviors (e.g. poor diet, physical inactivity, alcohol abuse and smoking), increasing an individuals risk of developing cardiovascular problems.

It is therefore in an effort to break this chain, and in doing so save costs long-term, that employers are increasing their focus on establishing effective wellness programs, meaning any promotional activity or organizational policy that supports healthy behavior in the workplace and improves health outcomes. Corporate wellness programs nowadays include anything from healthy eating education, financial advice and access to weight loss and fitness programs, to more direct healthcare such as on-site medical screening, stress management, smoking cessation programs, and counseling services (in the form of employee assistance programs).

And this certainly can save costs long-term! Most famously, Johnson & Johnson leaders estimate that wellness programs have cumulatively saved the company $250 million on healthcare costs over the past decade; with a return of $2.71 for every dollar spent between 2002 and 2008. Its no surprise, then, that in 2020 the workplace wellness industry was estimated to be worth $48 billion globally.

Recent innovations in the space include the integration of wearable or smartphone technologies, used by employees to monitor and collect physical health data. These technologies provide employees with real-world physiological health insights to further incentivize participation in programs and increase and maintain their engagement. They simultaneously provide employers with an insight into the overall physical health of their workforce.

A golden opportunity to transform our relationship with mental health

However, with this most recent integration of digital health technologies comes a hitherto unrecognized opportunity to transform our understanding and treatment of mental health and wellbeing.

One of the primary barriers to delivering quality mental health care throughout history has been the difficulty in establishing accurate and objective methods to diagnose, assess and monitor treatment outcomes for psychological conditions. As was explained so eloquently by Washington University in November last year, if patients display symptoms of a heart attack, there are biological tests that can be run to look for diagnostic biomarkers that determine whether they are indeed suffering a heart attack or not. However, in the case of mental health disorders, the window by which we access the mind is still through psychological questioning, not biological parameters.

Mental health professionals screen, diagnose and monitor the symptoms and outcomes of patients through self-reported methods prone to excess subjectivity, and therefore unreliability, such as diagnostic interviews and questionnaires. A patient's self-reported symptoms are correlated with the ICD or DSM diagnostic manuals, yet challenges arise from the high heterogeneity of mental illnesses, low inter-rater reliability (i.e. poor agreement between clinicians' diagnoses) and high comorbidity.

There is therefore a need to expand beyond solely symptom-based characterization of mental health conditions to biology-based characterization if we are to combat this unreliability and establish more evidence-based methods for diagnosis and monitoring, similar to our approach to physical illness.

So, how do we do this?

The National Institute for Mental Health for instance has already taken the first steps towards this with the RDoC (Research Domain Criteria). Advancements in MRI technology have also enabled research into understanding brain activity in certain depressive conditions.

But the most exciting development lies in the proliferation of wearable and smartphone health monitoring technologies. As the ability to collect vast amounts of physiological health data becomes more and more ubiquitous, the opportunity to utilize machine learning (ML) to extract new insights into the physiology of each individual grows larger.

With this comes the chance to uncover and establish personalized digital biomarkers for mental health conditions, described as indicators of mental state that can be derived through a patient's use of a digital technology. These digital biomarkers can cover physiological (e.g. heart rate), cognitive (e.g. eye movement on screens), behavioral (e.g. via GPS) and social (e.g. call frequency) factors. However, it is physiology that concerns us here.
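As a concrete illustration of the kind of physiological feature such a pipeline might compute from wearable data, here is RMSSD, a standard heart-rate-variability summary often explored as a stress-related signal. RMSSD is my example; it is not named in the text:

```python
def rmssd(rr_intervals_ms: list) -> float:
    """Root mean square of successive differences between R-R intervals (ms),
    a common heart-rate-variability feature derivable from wearable data."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Lower RMSSD is often read as reduced parasympathetic (rest) activity.
print(round(rmssd([800, 810, 790, 805, 795]), 1))  # 14.4
```

Validating any such feature as a digital biomarker would of course require exactly the kind of empirical work the article goes on to describe.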

Corporate wellness programs provide the perfect environment in which to explore the use of wearables and smartphone sensors to uncover digital biomarkers that link physical health to mental wellbeing, due to the huge potential benefits for all parties involved: employers and employees.

For example, by validating elements of cardiopulmonary function as a digital biomarker for excess stress or anxiety disorders (a relationship for which some empirical evidence already exists), employers can not only identify stress and anxiety risks in the workplace and intervene earlier to protect employee mental wellbeing, but also establish an evidence-based approach for evaluating the effectiveness of workplace wellness initiatives. This is because quantitative cardiopulmonary data would serve as a reliable measure of employee stress and/or mental wellbeing.

Employees, on the other hand, are empowered with insight into direct correlations between how they feel and their physical health, and their increased engagement in wellness programs will improve the programs' efficacy in preventing the deterioration of their mental health. For this reason, accessibility and ease of use must remain top of mind when choosing health monitoring technologies.

Finally, establishing digital biomarkers which correlate physiological parameters with mental health and wellbeing not only has the potential to provide more reliable tools for guiding diagnosis and evaluating patient outcomes but will also improve our understanding of the pathophysiology of mental disorders, in turn allowing for more effective preventive measures.

Photo: Creativeye99, Getty Images


Compound may prevent risk of form of arrhythmia from common medications – Washington University in St. Louis Newsroom

Dozens of commonly used drugs, including antibiotics, anti-nausea and anticancer medications, have a potential side effect of lengthening the electrical event that triggers contraction, creating an irregular heartbeat, or cardiac arrhythmia called acquired Long QT syndrome. While safe in their current dosages, some of these drugs may have a more therapeutic benefit at higher doses, but are limited by the risk of arrhythmia.

Through both computational and experimental validation, a multi-institutional team of researchers has identified a compound that prevents the lengthening of the hearts electrical event, or action potential, resulting in a major step toward safer use and expanded therapeutic efficacy of these medications when taken in combination.

The team found that the compound, named C28, not only prevents or reverses the negative physiological effects on the action potential, but also does not cause any change on the normal action potential when used alone at the same concentrations. The results, found through rational drug design, were published online in Proceedings of the National Academy of Sciences (PNAS) on May 14.

The research team was led by Jianmin Cui, professor of biomedical engineering in the McKelvey School of Engineering at Washington University in St. Louis; Ira Cohen, MD, PhD, Distinguished Professor of Physiology and Biophysics, professor of medicine and director of the Institute for Molecular Cardiology at the Renaissance School of Medicine at Stony Brook University; and Xiaoqin Zou, professor of physics, biochemistry and a member of the Dalton Cardiovascular Research Center and Institute for Data Science and Informatics at the University of Missouri.

The drugs in question, as well as several that have been pulled from the market, cause a prolongation of the QT interval of the heartbeat, known as acquired Long QT Syndrome, that predisposes patients to cardiac arrhythmia and sudden death. In rare cases, Long QT also can be caused by specific mutations in genes that code for ion channel proteins, which conduct the ionic currents to generate the action potential.

Although there are several types of ion channels in the heart, a change in one or more of them may lead to this arrhythmia, which contributes to about 200,000 to 300,000 sudden deaths a year, more deaths than from stroke, lung cancer or breast cancer.

The team selected a specific target, IKs, for this work because it is one of the two potassium channels that are activated during the action potential: IKr (rapid) and IKs (slow).

"The rapid one plays a major role in the action potential," said Cohen, one of the world's top electrophysiologists. "If you block it, Long QT results, and you get a long action potential. IKs is very slow and contributes much less to the normal action potential duration."

It was this difference in roles that suggested that increasing IKs might not significantly affect normal electrical activity but could shorten a prolonged action potential.

Cui, an internationally renowned expert on ion channels, and the team wanted to determine whether the prolongation of the QT interval could be prevented by enhancing IKs to compensate for the change in current that induces the Long QT Syndrome. They identified a site on the voltage-sensing domain of the IKs potassium ion channel that could be accessed by small molecules.

Zou, an internationally recognized expert who specializes in developing new and efficient algorithms for predicting protein interactions, and the team used the atomic structure of the KCNQ1 unit of the IKs channel protein to computationally screen a library of a quarter of a million small compounds that targeted this voltage-sensing domain of the KCNQ1 protein unit. To do this, they developed software called MDock to test the interaction of small compounds with a specific protein in silico, or computationally.

By identifying the geometric and chemical traits of the small compounds, they can find the one that fits into the protein, rather like a high-tech, 3D jigsaw puzzle. While it sounds simple, the process is quite complicated, as it involves charge interactions, hydrogen bonding and other physicochemical interactions of both the protein and the small compound.

"We know the problems, and the way to make great progress is to identify the weaknesses and challenges and fix them," Zou said. "We know the functional and structural details of the protein, so we can use an algorithm to dock each molecule onto the protein at the atomic level."

One by one, Zou and her lab docked the potential compounds with the protein KCNQ1 and compared the binding energy of each one. They selected about 50 candidates with very negative, or tight, binding energies.
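The selection step amounts to ranking compounds by predicted binding energy (more negative means tighter binding) and keeping the best scorers. A toy sketch with invented names and energies; MDock's actual output format is not described in the article:

```python
# Hypothetical docking scores (e.g. kcal/mol); more negative = tighter binding.
scores = {"cmpd_A": -9.2, "cmpd_B": -4.1, "cmpd_C": -11.8, "cmpd_D": -7.5}

def top_binders(scores: dict, n: int) -> list:
    """Return the n compound names with the most negative binding energies."""
    return sorted(scores, key=scores.get)[:n]

print(top_binders(scores, 2))  # ['cmpd_C', 'cmpd_A']
```

In the study this shortlist of ~50 candidates was then handed to experimental validation, which is what narrowed the field to C28.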

Cui and his lab then used experiments to identify C28 from the 50 candidates identified in silico by Zou's lab. They validated the docking results by measuring the shift of voltage-dependent activation of the IKs channel at various concentrations of C28, confirming that C28 indeed enhances IKs channel function. They also studied a series of genetically modified IKs channels to verify that C28 binds at the site used for the in silico screening.

Cohen and his lab tested the C28 compound in ventricular myocytes from a small mammal model that expresses the same IKs channel as humans. They found that C28 could prevent or reverse the drug-induced prolongation of the electrical signals across the cardiac cell membrane and minimally affected the normal action potentials at the same dosage. They also determined that there were no significant effects on atrial muscle cells, an important control for the drug's potential use.

"We are very excited about this," Cohen said. "In many of these medications, there is a concentration of the drug that is acceptable, and at higher doses, it becomes dangerous. If C28 can eliminate the danger of inducing QT prolongation, then these drugs can be used at higher concentrations, and in many cases, they can become more therapeutic."

While the compound needs additional verification and testing, the researchers say this compound, or others like it, has tremendous potential: it could help convert second-line drugs into first-line drugs and return others to the market. With assistance from the Washington University Office of Technology Management, they have patented the compound, and Cui has founded a startup company, VivoCor, to continue working on it and similar molecules as potential drug candidates.

The work was accelerated by a Leadership and Entrepreneurial Acceleration Program (LEAP) Inventor Challenge grant at Washington University in St. Louis in 2018, funded by the Office of Technology Management, the Institute of Clinical and Translational Sciences, the Center for Drug Discovery, the Center for Research Innovation in Biotechnology, and the Skandalaris Center for Interdisciplinary Innovation and Entrepreneurship.

"This work was done by an effective drug design approach: identifying a critical site in the ion channel based on understanding of the structure-function relation, using in silico docking to identify compounds that interact with the critical site in the ion channel, validating functional modulation of the ion channel by the compound, and demonstrating therapeutic potential in cardiac myocytes," Zou said. "Our three labs form a great team, and without any of them, this would not be possible."

The McKelvey School of Engineering at Washington University in St. Louis promotes independent inquiry and education with an emphasis on scientific excellence, innovation and collaboration without boundaries. McKelvey Engineering has top-ranked research and graduate programs across departments, particularly in biomedical engineering, environmental engineering and computing, and has one of the most selective undergraduate programs in the country. With 140 full-time faculty, 1,387 undergraduate students, 1,448 graduate students and 21,000 living alumni, we are working to solve some of society's greatest challenges; to prepare students to become leaders and innovate throughout their careers; and to be a catalyst of economic development for the St. Louis region and beyond.

Lin Y, Grinter S, Lu Z, Xu X, Wang H Z, Liang H, Hou P, Gao J, Clausen C, Shi J, Zhao W, Ma Z, Liu Y, White K M, Zhao L, Kang P W, Zhang G, Cohen I, Zou X, Cui J. Modulating the voltage sensor of a cardiac potassium channel shows antiarrhythmic effects. Proceedings of the National Academy of Sciences (PNAS), date, DOI.

This research was supported by grants from the National Institutes of Health (R01 HL126774, R01 DK108989, R01 GM109980, R35 GM136409) and the American Heart Association (13GRNT16990076). The computations were performed on high-performance computing infrastructure supported by NSF CNS-1429294 and on HPC resources supported by the University of Missouri Bioinformatics Consortium (UMBC).

Authors Jianmin Cui and Jingyi Shi are cofounders of a startup company, VivoCor LLC, which is targeting IKs for the treatment of cardiac arrhythmia.

See more here:
Compound may prevent risk of form of arrhythmia from common medications - Washington University in St. Louis Newsroom

Why we find the sound of our voice cringeworthy – Scroll.in

As a surgeon who specialises in treating patients with voice problems, I routinely record my patients speaking. For me, these recordings are incredibly valuable. They allow me to track slight changes in their voices from visit to visit, and it helps confirm whether surgery or voice therapy led to improvements.

Yet I am surprised by how difficult these sessions can be for my patients. Many become visibly uncomfortable upon hearing their voice played back to them.

"Do I really sound like that?" they wonder, wincing.

(Yes, you do.)

Some become so unsettled they refuse outright to listen to the recording, much less go over the subtle changes I want to highlight.

The discomfort we have over hearing our voices in audio recordings is probably due to a mix of physiology and psychology.

For one, the sound from an audio recording is transmitted differently to your brain than the sound generated when you speak.

When listening to a recording of your voice, the sound travels through the air and into your ears, in what is referred to as air conduction. The sound energy vibrates the ear drum and small ear bones. These bones then transmit the sound vibrations to the cochlea, which stimulates nerve axons that send the auditory signal to the brain.

However, when you speak, the sound of your voice reaches the inner ear in a different way. While some of it is transmitted through air conduction, much of the sound is conducted internally, directly through your skull bones. The voice you hear when you speak is a blend of both external and internal conduction, and internal bone conduction appears to boost the lower frequencies.

For this reason, people generally perceive their voice as deeper and richer when they speak. The recorded voice, in comparison, can sound thinner and higher-pitched, which many find cringeworthy.
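A toy illustration of that spectral shift (not a physiological model; every number below is made up): if bone conduction adds extra low-frequency energy to the voice you hear while speaking, the "perceived" signal carries more bass than the same voice captured by a microphone.

```python
import math

def rms(samples):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

rate = 8000
t = [i / rate for i in range(rate)]  # one second of sample times

low  = [math.sin(2 * math.pi * 150 * x) for x in t]    # low-frequency component
high = [math.sin(2 * math.pi * 1500 * x) for x in t]   # high-frequency component

# Air conduction only (what the recorder captures) vs. air conduction
# plus a bone-conduction boost on the low component (what the speaker hears).
recorded  = [l + h for l, h in zip(low, high)]
perceived = [2.0 * l + h for l, h in zip(low, high)]

# The perceived signal has more total (low-frequency) energy than the recording.
print(rms(recorded), rms(perceived))
```

The factor of 2.0 on the low component is an arbitrary stand-in for the bone-conduction boost; the point is only that the same voice, heard two ways, has a different spectral balance.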

There is a second reason hearing a recording of your voice can be so disconcerting. It really is a new voice one that exposes a difference between your self-perception and reality. Because your voice is unique and an important component of self-identity, this mismatch can be jarring. Suddenly you realise other people have been hearing something else all along.

Even though we may actually sound more like our recorded voice to others, I think the reason so many of us squirm upon hearing it is not that the recorded voice is necessarily worse than our perceived voice. Instead, we are simply more used to hearing ourselves sound a certain way.

A study published in 2005 had patients with voice problems rate their own voices when presented with recordings of them, and had clinicians rate the same voices. The researchers found that patients, across the board, rated the quality of their recorded voices more negatively than the clinicians' objective assessments did.

So if the voice in your head castigates the voice coming out of a recording device, it is probably your inner critic overreacting and you are judging yourself a bit too harshly.

Neel Bhatt is an Assistant Professor of Otolaryngology, UW Medicine at the University of Washington.

This article first appeared on The Conversation.
