
New translations – Duke Chronicle

Opinion | Column

the picture of health

When you think about the phrase "medical research," the image that probably enters your mind is a white-coated scientist carefully pipetting chemicals, culturing cells or observing something under a microscope. For centuries, research in the basic sciences (chemistry, physics and biology) produced medicine's most substantive advances. However, a shifting landscape of disease suggests that medicine will have to adopt discoveries from a broader array of disciplines, such as psychology and economics, if it is to effectively address the challenges of the future.

Translational research is the process of transitioning discoveries from bench to bedside: leveraging fundamental scientific discoveries into applicable treatments for patients. Take, for example, the development of cancer therapies. A fundamental understanding of cell division led to the discovery of agents that halt that process. These agents were then developed into medications that target rapidly dividing cancer cells. Similarly, research about human behavior and decision-making can be leveraged to create interventions that treat diseases driven by choices.

In industrialized nations, the landscape of disease is rapidly shifting. Due to medical advances in the treatment of infectious disease and traumatic injury, American mortality is increasingly driven by chronic, and often preventable, disease. In 1900, 23.1 percent of all deaths were attributed to pneumonia, influenza and tuberculosis, and just 3.7 percent of deaths were attributed to cancer. By 2005, however, influenza and pneumonia accounted for just 2.6 percent of deaths while cancer contributed 22.8 percent. The discoveries of chemists, biologists and physicists have radically improved the survival of cancer patients. However, research from the behavioral sciences can help craft interventions that lower the risk of developing the disease in the first place.

Inducing behavioral change could have a major effect on health outcomes. In 2010, it was estimated that 42.7 percent of the cancers in Britain could be attributed to lifestyle factors, such as obesity, smoking and lack of exercise. Psychological studies shed light on the most effective ways to elicit a lifestyle change. For example, motivational interviewing is a technique used to prompt and support patients in making change. This technique seeks to address ambivalence to change by eliciting patients' own motivation. Using this approach to prompt behavioral change, rather than confronting or persuading patients, results in statistically significant improvements in health. Studies have shown that motivational interviewing results in increased weight loss and exercise, decreased viral load among HIV-positive patients, and lower blood pressure and cholesterol values.

Another non-traditional discipline that is particularly suited to crafting health interventions is behavioral economics, which combines the fundamentals of economic theory with insights from psychology. Behavioral economics challenges the assumption that humans behave as fully informed and rational actors, and instead understands decision-making as a process with predictable biases. For example, psychological and economic studies have shown that humans overvalue immediate rewards and undervalue delayed rewards. In health terms, this means that the immediate joy of a donut is overvalued against the amorphous increased risk of cardiovascular disease in the future. Similarly, the hassle of taking a medication every day may seem more onerous than the potential progression of future disease. Since these biases are predictable, interventions can be designed to anticipate and counteract a bias, or to use it to predispose us to healthy, rather than unhealthy, behavior.

Investigators at the University of Pennsylvania used findings from behavioral economics to create an intervention to increase adherence to warfarin, an anticoagulant that must be taken consistently to be effective. In the study, a machine recorded each time a dose of the medication was taken and gave the patient an entry in a lottery with small prizes. The incentive increased medication adherence more effectively than a simple reminder message. Given that discontinuation of warfarin carries major health risks, and that patients who discontinue warfarin typically incur an additional $5,000 in annual healthcare expenditures, this intervention could have significant clinical implications.

Information from psychology about human decision-making, and human error, can also be used to help physicians and surgeons provide better care for patients. Industries such as aviation, manufacturing and nuclear power have long incorporated research about human error into their systems design. Their systems anticipate and respond to human mistakes, allowing for correction before an error becomes critical, and leave nothing to chance or fallible human memory. While medicine has been slower to adopt this mentality, reports suggesting that as many as 98,000 patients die annually as the result of medical errors have spurred action. Simple, evidence-based interventions that anticipate human error can have a major impact: implementing an infection prevention checklist dropped the rate of infections in Michigan ICUs by 66 percent.

While translational research has traditionally focused on the hard sciences, human behavior is an increasingly important factor driving morbidity and mortality. In order to address these challenges, we must widen the focus of translational research to include research that specifically addresses decision-making and behavior. Interventions based on this research can help elicit behavioral change in patients and help protect patients from the inevitable fallibility of medical providers. Just as fundamental discoveries in the hard sciences have led to life-saving advances, so too can the discoveries of the behavioral sciences.

Lauren Groskaufmanis is a graduate student in the School of Medicine. Her column, "the picture of health," runs on alternate Fridays.



New GOP bill lets companies force you to take genetic tests, lets them share results with third parties – ExtremeTech

A new bill introduced by Rep. Virginia Foxx (R-NC) and approved by the House Ways and Means Committee would allow corporations to force employees to undergo genetic testing and then share those results with third parties. In theory, this is already illegal, thanks to a 2008 law known as GINA (the Genetic Information Nondiscrimination Act). This type of behavior is also regulated by the Americans with Disabilities Act (ADA).

The new House bill, HR 1313, gets around these issues by preemptively declaring that workplace wellness programs offered in conjunction with an employer's sponsored health care plan "shall be considered to be in compliance" with GINA, the ADA, and other workplace protections. Given that the relevant section of GINA (section 202(b)(2)) specifically states that it shall be unlawful for employers to gather genetic information on employees without the express permission and consent of the employee in question, the GOP just wrote a privacy-shredding exception into a bill and then quietly passed that bill through committee.

Workplace wellness programs have been controversial because they largely don't seem to work, but they remain popular as a method of pushing healthcare costs onto employees. Historically, companies have been allowed to offer these programs (and to enforce fiscal penalties on employees who refuse to meet their goals). But HR 1313 goes farther than simply allowing genetic profiling of employees because an employer offers insurance coverage. The bill actually stipulates that any company with a program that has a workplace wellness component can mandate genetic collection, whether it provides insurance or not. It also states:

[T]he collection of information about the manifested disease or disorder of a family member shall not be considered an unlawful acquisition of genetic information with respect to another family member as part of a workplace wellness program. [emphasis added]

Under the GOP's bill, which has already passed through one committee vote with 22 Republicans voting for it and 17 Democrats against, it would be explicitly legal for companies to collect genetic information on your family members. It's also legal for them to share that information with third parties, in complete and total abrogation of the privacy protections passed in 2008.

The American Society for Human Genetics has blasted the bill:

H.R. 1313 would effectively repeal these protections by allowing employers to ask employees invasive questions about their and their families' health, including genetic tests they, their spouses, and their children may have undergone. GINA's requirement that employees' genetic information collected through a workplace wellness program only be shared with health care professionals would no longer apply.

HR 1313 is a travesty. It guts previous protections passed by Congress intended to protect the most fundamentally personal information any human possesses: their own genetic code. It would allow corporations to share that data with third parties for analysis without stripping it of identifying information (GINA forbids this, but 1313 supersedes GINA). It would allow companies to levy fines of up to 30 percent of the cost of health premiums on employees who fail to cooperate. The ASHG notes that the average premium cost for employees in 2016 was $18,142, meaning families could face an additional $5,443 in premium costs per year for refusing to hand over their genetic and health information.
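The arithmetic behind that figure is simply the maximum penalty applied to the average premium: 0.30 × $18,142 = $5,442.60, which rounds to the $5,443 the ASHG cites.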


Researchers discover intestinal quiescent stem cells that are resistant to chemotherapy – News-Medical.net

March 10, 2017 at 7:49 PM

The intestine has a high rate of cellular regeneration due to the wear and tear caused by its function: degrading and absorbing nutrients and eliminating waste. The entire cellular lining is renewed approximately once a week. This explains why the intestine holds a large number of stem cells in constant division, thereby producing new cell populations of the various types present in this organ.

Researchers at the Institute for Research in Biomedicine (IRB Barcelona), headed by ICREA investigator Eduard Batlle, head of the Colorectal Cancer Laboratory, have discovered a new group of intestinal stem cells with very different characteristics from those of the abundant and active stem cells already known in this organ. Performed in collaboration with the Centro Nacional de Análisis Genómico (CNAG-CRG), the study has been published in Cell Stem Cell. This new group of stem cells is quiescent; that is to say, they do not proliferate and are apparently dormant.

The researchers describe them as a reservoir of stem cells: it is estimated that there is one quiescent cell for every 10 active intestinal stem cells. In healthy conditions, these cells have no apparent relevant function. However, they are important in situations of stress, for example after chemotherapy, in inflammatory processes, and in tissue infections, all conditions in which the population of "normal/active" stem cells is depleted. These quiescent cells would serve to regenerate the organ by giving rise to the various types of cells present in the intestine, renewing the population of "normal/active" stem cells, and restoring balance to the tissue.

Eduard Batlle explains that the discovery of quiescent stem cells in the intestine reveals that stem cell biology is more complex than previously appreciated and that it does not follow a hierarchical model of cell organisation. "In intestinal cell hierarchy, there are no cells above others, so the two populations are in a continual balance to ensure the proper function of the organ."

Most drugs against cancer have side effects on the cells that are dividing in our tissues. "Because quiescent stem cells divide infrequently, they are resistant to many types of chemotherapy and they regenerate the tissue that this treatment has damaged," explains Eduard Batlle, head of one of the internationally renowned labs researching intestinal stem cells and their involvement in colorectal cancer.

Quiescent cells are present in many kinds of tissue. However, in spite of their relevance in tissue regeneration, increasing evidence points to their involvement in tumour development. "It is difficult to study these cells, mainly because they are scarce and there are technical limitations with respect to monitoring, staining and distinguishing them from the others," explains Francisco Barriga, first author of the study and current postdoctoral fellow at the Memorial Sloan Kettering Cancer Center in New York.

Using advanced techniques, such as genetic tracing of cell lineages and transcriptomic analysis of individual cells, performed by CNAG-CRG and the Bioinformatics and Biostatistics Unit at IRB Barcelona, the group has identified the distinct genetic programme used by quiescent stem cells compared with normal intestinal stem cells. This work was carried out over six years.

The researchers have labelled this cell population with a specific marker, the Mex3a protein, which has allowed them to track it over time. "We intend to continue studying quiescent stem cells in health and disease and to discover the function of the genes that distinguish them in the colon and in other organs," says Batlle.


Grey’s Anatomy’s Kevin McKidd Has a Grim Warning About Owen … – E! Online

Could Owen Hunt be on his way to his second divorce on Grey's Anatomy?

When season 13 began, we had such high hopes for newlyweds Owen and Amelia, but like all hope in Shondaland, it quickly dissipated to the point that they aren't even sleeping under the same roof anymore, let alone speaking. And as Kevin McKidd tells us, this very well could be the end of the road for the couple.

"It's a hard one because she's got all these demons. He does too. And now they've hit against this big issue of the baby. Owen has always imagined having a family, and now she seems to be changing her view on that. So that's going to be a big issue for them," the actor told E! News during a recent visit to the Grey's set. "I'll be interested to see what happens, but, at the moment, it's not looking good. I have to say, it's not looking good. But sometimes that's what's so interesting about the show and I think what's clever about the show is that it looks like the story's pulling you in one direction and one thing will happen and it will change everything."


But before you give up on Owen and Amelia completely, know that McKidd isn't ready to throw in the towel just yet. "I've got a feeling that Amelia's going to sort of come to Owen's rescue somehow," he admitted. "I don't know why I think that. It's just a gut feeling I have."

Whatever happens, look for some movement on that front beginning with tonight's episode, when Amelia (Caterina Scorsone) finally faces her feelings about her estranged husband.

Speaking of estrangement, when we sat down with McKidd we couldn't resist the opportunity to test him on the fan theory out there that his character's presumed-dead sister Megan (whom we met in this season's flashback-laden episode "The Room Where It Happens," where she was played by Bridget Regan) isn't actually all that dead. After all, this is Grey's. You don't usually hear about a family member if they're not going to make their way to Grey Sloan Memorial in some way, shape or form.

So, could McKidd shed any light on the theory? After a long pause wherein he seemed to be very carefully crafting his response, he said, "I can't. Listen, on any ABC Shondaland show, there's always a maybe to everything. Anything can happen. I've got to say, the actress who played her in the flashback episode is brilliant and we had great chemistry and we got along really well. So, if that happened, I'd be very delighted about it."

For more from McKidd, including why he's hoping for a visit from former co-star Sandra Oh, be sure to check out the video above.

Are you still holding out hope for Owen and Amelia? And do you buy into the theory that Megan just might be alive? Share your thoughts in the comments below!

Grey's Anatomy airs Thursdays at 8 p.m. on ABC.



How an Atari Chip Set Off a War Among Neuroscientists – WIRED


This January, a video game chip started a scientific reckoning. It all began when some microchip archaeologists photographed the chip (the MOS 6502 microprocessor that lived inside the Atari) and built a digital model of its interconnections. Then some neuroscientists put it to the test. One by one, they knocked out the transistors in their map, trying to get at what the circuit was for. It's similar to what neuroscientists do when they lesion a part of the brain, or silence single neurons. Their project was simple: could they use the arsenal of neuroscience methods to get at the function of a simple circuit?
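In outline, the experiment is a simple ablation loop. Here is a minimal sketch of that logic, not the authors' actual code; the function names simulate_chip and boots_game are hypothetical stand-ins for the real netlist simulator and behavioral test:

    # Hedged sketch of the transistor-'lesioning' approach.
    def lesion_study(transistors, simulate_chip, boots_game):
        critical = []
        for t in transistors:
            chip = simulate_chip(disabled={t})   # knock out a single transistor
            if not boots_game(chip):             # does the game still start?
                critical.append(t)               # this 'lesion' broke the behavior
        # A list of essential transistors; note it says little about what each one is *for*.
        return critical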

They failed. Miserably. The scientists' experiments didn't produce much information about Donkey Kong, Space Invaders, or Pitfall, just which transistors you could knock out to turn the game off. The result was damning for researchers pursuing the connectome, a bottom-up recreation of all the brain's interconnections. To the neuroscience community, the message was clear: brain scientists may have plenty of bottom-up data about the brain, but they're far from using that data to understand how the organ works. "For all of these approaches, we haven't really thought through how to ultimately get at an understanding of the brain based on the data we're getting," says Konrad Kording, a neuroscientist at Northwestern and one of the study's authors.

But even Kording thinks there's hope for the future of the connectome. It just isn't quite what people think.

To get a connectome, neuroscientists bounce beams of electrons off of neural tissue, creating nanoscale images of cell membranes and organelles, in some cases even little bubbles of neurotransmitters. Then they can trace the long, thin axons and dendrites, ultimately building a map of all their interconnections.

And just having the map isn't enough. "For the microprocessor example, we know exactly how transistors behave, and we can simulate them," says Shawn Mikula, a neuroscientist at the Max Planck Institute for Neurobiology in Germany working toward a whole-brain connectome of the mouse. "But for the cellular connectome, we don't know the individual properties of neurons very well." Neurons have complex electrical properties, and synapses that can be active or silent. They can also release different neurotransmitters onto many different types of receptors. Neurons, Mikula says, are worlds unto themselves.

But even if they could collect the data about all of that variation (and both Mikula and Kording are skeptical), it's a big jump from a simulation to understanding the brain. Kording says that if he could give a colleague a hard disc with the whole human connectome on it right now, they wouldn't know what to do with it. "The shocking thing is that even the brightest people I know in neuroscience just say, 'Well, someone's going to figure it out.'" Neuroscientists want to figure out how neurons influence each other, gaining a broader understanding of how the brain computes. But his microprocessor paper suggests that, at least with the tools neuroscientists have now, that wouldn't be possible.

Mikula, along with many in the field, is less fatalistic. He actually uses the Atari microprocessor in his talks as an example of how bottom-up structure can be used to predict function. "Because in the paper that was exactly what they did," Mikula says. "They had a circuit structure and they ran simulations on it to determine the function. The paper was actually proving the point of connectomics." The functions were simple (on/off switches and clocks) but they still figured them out.

Just like Kording's experiments didn't get all the way to understanding the microprocessor, a connectome might not give you a perfect simulation of the brain. But it could simplify certain research problems in neuroscience. A neural map could constrain the problem of where a piece of information could go if, say, it came into your brain through your eyes, letting researchers trace the path through the maze of the brain. In Mikula's view, the connectome is a tool for asking questions about the brain, rather than the answer to the question of how the brain works.

The connectome could also clear up one area of neuroscience that desperately needs some structure: neuroanatomy. Classically, neuroscientists have used traces of different chemicals to parcel the brain out into little modules, like the dopamine-rich substantia nigra, which works to provide rewards. That way of thinking works pretty well for some brain areas, Mikula says, but when you get to more complex processing regions it starts to get a bit fuzzier. The connectome could fundamentally change how scientists talk about the brain's structure. "You can see whether it actually makes sense to talk about having discrete processing modules or discrete areas in the brain," says Mikula.

To really tap the potential of the connectome-as-tool, though, neuroscientists will need to figure out how to build them faster. Comparing multiple connectomes could help explain disease pathology or between-species differences in brains, but right now, building just a single connectome is a huge endeavor. The most successful method involves imaging the top of a cube of brain, then slicing off a very thin layer with a diamond-tipped blade and imaging the next layer in the cube. The slices are destroyed in the process, so you only have one shot to get the images.

Mikula is one neuroscientist working to build connectomes faster. He's developing a whole mouse brain on tape, with software that can randomly access different parts of the brain and image at different resolutions. Neuroscientists could target certain parts of the brain at lower resolution and target high-priority spots for high-res imaging later. Eventually, it might be possible to label different chemicals in the cells on the tape as well, to get at the functional differences between different types of cells. Mikula may never get to a whole-brain simulation, but that won't stop him from trying.


Historic Lotamore House is rejuvenated as a fertility clinic – Irish Examiner

A multi-million euro private medical investment at Cork's historic Lotamore House has just come to completion, after an 18-month-plus gestation.

Lotamore House, in Tivoli, Cork, which has been transformed to become the Waterstone Clinic.

Now set to employ 55, the Waterstone Clinic (previously known as the Cork Fertility Centre) has just been completed as a 13,000 sq ft centre of excellence at the 210-year-old Lotamore House in Cork.

The classical, villa-style building was sold in 2013 to Dr John and Susan Waterstone for an unconfirmed €800,000, having had a recently chequered past in previous ownerships.

It was controversially and briefly occupied by a protest group, the Rodolphus Allen Private Family Trust, after the property was taken over by receivers Deloitte from the previous private owner, Sidney McElhinney, who had plans to turn it into a 90-bed nursing home.

Lotamore House had previously sold for over €3m, on 11 acres by the Tivoli dual carriageway, and other previous uses of the grand-era villa included offices for a computer firm, as well as offices for the Irish Sweepstakes in the mid-1900s.

It had operated too as a luxury guesthouse for many years, hosting judges on the circuit, among other guests.

It featured on TV news during the brief-lived occupation, until gardaí moved a caravan off its grounds.

And a proposal to document Lotamore House's transition to a 21st-century fertility clinic was pitched to RTÉ by a production company, GoodLookingFilms, but the broadcaster didn't commission the series, which promised to mix medical science and embryo technology with Grand Designs.

Private family owners included the Hacketts, the Ronayne Mahonys, the Cudmores, the Lunhams and the Huguenot merchant family, the Perriers.

Now, claiming to be the most advanced fertility unit in the country, Lotamore is set to play a role in creating new families, out of a building with three centuries of Cork history.

The Waterstone Clinic previously operated on College Road, Cork, with clinics also in Waterford, Limerick and Dublin, on Leeson St.

At Lotamore, it has grown its lab space fivefold, to 1,500 sq ft of high-tech lab with the latest embryology technology, and the building also accommodates five scan rooms (up from two), five consultation rooms, five recovery rooms, three masturbatoriums, two theatres, a reception area, etc.

Procedures are on a day-visit basis, with no overnight facilities.

"Lotamore House is a historic 18th-century Cork building, and we have sympathetically refurbished and restored it, preserving its fine period details while incorporating modern facilities and comforts.

"We have endeavoured to make a visit to Lotamore House as stress-free as possible for patients, with generous parking, spacious waiting areas and an interior design that maximises privacy," said founder Dr John Waterstone, who will host Lotamore's first seminar post-opening on March 23.



Should Naturalism Define Science? (RJS) – Patheos (blog)

Methodological naturalism. For most scientists this is a foregone conclusion: a scientist studies nature and looks for natural cause and effect. Among Christians the term is often viewed as a cop-out, giving away the farm by ruling divine action out of bounds. Many atheists view the term as indicative of a failure to face facts and admit that there is nothing but the natural world. Which view is closest to yours?

Jim Stump, in his recent book Science and Christianity: An Introduction to the Issues, digs into the concept of methodological naturalism. His first point (as a good philosopher) is that methodological naturalism is not an easy concept to define. Well, "methodological" isn't terribly hard to grasp. Methodological is contrasted with metaphysical or ontological naturalism. The emphasis in methodological naturalism is on the method of doing science rather than on the existence or nonexistence of anything beyond the natural world. All scientists can approach their work as methodological naturalists no matter what views they hold concerning the ultimate shape of reality: Christian, atheist, Hindu, Buddhist, or whatever. For Jim, the hard term is "natural." What counts or doesn't count as natural? Most definitions are, or seem, circular: natural phenomena are those that are investigated by natural means obeying natural laws.

The trouble with adopting methodological naturalism is that it seems we have to predetermine what counts as natural. And that will inescapably involve metaphysical notions and values that are not properly scientific by the standards of methodological naturalism. In that case, our metaphysics is going to affect our science, so long as we are committed to science as explanatory. (p. 71-72)

Commenters on this blog have occasionally suggested that methodological naturalism is metaphysical naturalism in disguise because it simply rules out everything else. Certainly some who favor intelligent design feel this danger. Let's not worry about defining "natural" at this time and move on to look at the nature and practice of methodological naturalism.

Practice of Science. It is relatively easy to see how the practice of chemistry and physics, geology and agriculture, genetics and embryology, along with many other disciplines and subdisciplines, can be approached through the lens of methodological naturalism. We look for and confine ourselves to the study of the interactions between atoms and molecules, even subatomic particles, the interaction of light and gravity with matter, and the laws that describe these interactions.

Problems may arise when scientists in these fields look to grand unifying theories. Jim brings some of Alvin Plantinga's work into the discussion.

There is something to be said for recognizing disciplinary boundaries. Michael Ruse compares methodological naturalism to going to a doctor and expecting not to be given any political advice. The doctor may have very strong political views, but it would be inappropriate for him or her to disseminate them in that context. So, too, the scientist ought not to disseminate religious views, as they are not relevant to the task at hand. But Plantinga counters that in assessing grand scientific theories we will necessarily cross disciplinary lines in order to use all that we know that is relevant to the question. For the Christian, he thinks this properly allows the use of biblical revelation in assessing whether something like the theory of common ancestry is a correct explanation. And he believes that can be called Augustinian, or theistic, science. (p. 76)

In part this is because, historically speaking, what counts as natural is a moving target. I think Plantinga has an important point concerning grand theories, but (big but) he is completely off-base in applying his concern to the question of common ancestry. Evolution and common descent are natural scientific questions, with methodological naturalism an appropriate approach even for devout Christians. Before digging a little deeper into places where methodological naturalism should be held lightly, we will look briefly at reasons for retaining it.

Retaining Methodological Naturalism. In the natural sciences (biology, chemistry, physics, geology, astronomy, climatology, meteorology ...), the reasons given for abandoning methodological naturalism are always gap arguments. Jim does not put it quite this bluntly, but after reading quite widely, this is the clear conclusion. I have not yet found an argument that is not based on a possibly temporary state of ignorance. Protestations to the contrary are emotional rather than evidence based.

Inserting supernatural agency or events into explanations has a fairly poor track record historically. Science has been remarkably successful at figuring out the causes of phenomena that were once explained by supernatural agents, from thunder and solar eclipses to disease and epilepsy. Of course that doesn't mean that science will be able to figure out everything in the future. But it should give us pause before thinking we've found some phenomenon for which there will never be any scientific explanation. To do otherwise would be to inhibit scientific investigation. Take the example of how the first living cell came about. Scientists don't have very promising models right now for how that could have happened through natural means. (p. 77-78)

Both Alvin Plantinga (Jim cites a couple of articles written in 1996 and 1997) and Stephen Meyer (Signature in the Cell) suggest that this should allow us to draw the conclusion that the best explanation is that here we have a place where God acted as an intelligent agent. Jim notes: "But should we call it the best scientific explanation we have at present if we say 'and then a miracle happened and there was life'? It seems more in keeping with our present usage to say, 'At present we have no scientific explanation for that phenomenon.'" (p. 78) To insert a supernatural act of God here is to insert God into a gap in our knowledge. If the gap fills, where is God? Of course God is responsible for the origin of life, just as he is responsible for the weather and the formation of a babe in the womb; but it isn't either God or science. It is God and science. As a Christian I am convinced that as scientists we study God's ordained and sustained creation. Perhaps there are places where there will never be a satisfactory scientific explanation, but it is unwise to draw this conclusion about any individual proposal.

When is methodological naturalism troublesome? Here I leave Jim's chapter and give my own view. Methodological naturalism is troublesome when we step away from the impersonal (chemistry, physics, ...) and move to the personal. If there is a God who interacts with his creation, methodological naturalism will give the wrong result in these instances.

Methodological naturalism applied to the study of history will guarantee that we never find God active in history. Methodological naturalism would require us to accept that dead people never come back to life without some yet unknown scientific mechanism for rejuvenation. Methodological naturalism would require us to propose a natural explanation for every act of Jesus, from walking on water to stilling the storm, healing the lame, blind, and deaf, and feeding the multitudes. For many, the natural explanation is that these never happened; they are tall tales. But the incarnation is a very personal act. If the Christian God exists, methodological naturalism won't get to the truth. N. T. Wright makes this argument in his book The Resurrection of the Son of God. If we don't eliminate the possibility of resurrection, then the Resurrection makes good sense. Many scholars today, of course, simply eliminate the possibility and look for natural explanations.

I will suggest that another place where methodological naturalism fails is in some areas of the social sciences. Humans and human social constructs are shaped by interpersonal interactions. The plasticity of the human brain means that we are shaped and formed not only by nature (i.e., our genes) but also through community (our social environment). Ideas change people. If there is a God who interacts with his people, his presence and interaction will change people. Natural explanations, ignoring the supernatural (i.e., God), will never get to the complete truth. Here is a case where the a priori move to eliminate God from consideration will limit understanding, if there is a God. This isn't miraculous, but neither is it natural, because God isn't natural.

Methodological naturalism is troublesome when it shapes our grand theories of being. However, it is counterproductive, and can be destructive to faith, to insist on gaps in impersonal processes and to insert divine, as opposed to natural, cause.

What do you think?

Is methodological naturalism a useful approach?

What are the limits, if any, to methodological naturalism?

If you wish to contact me directly you may do so at rjs4mail[at]att.net

If interested you can subscribe to a full text feed of my posts at Musings on Science and Theology.


UG/PG admission begins at AMU, Aligarh: Check out the details – India Today

The Aligarh Muslim University (AMU), Aligarh, has released an admission notification inviting applications from interested, eligible candidates for admission to its various programmes, offered under various specialisations, for the academic session 2017.

BA programme: Candidates interested in applying for this programme should have passed the senior secondary school or equivalent examination with at least 50 per cent marks in aggregate, with English and three subjects from: accountancy, Arabic, banking, biology, biotechnology, business organisation, business studies, chemistry, commerce, computer science, economics, education, English, fine arts, geography, Hindi, history, home science, Islamic studies, mathematics, Persian, philosophy, physical health education, physics, political science/civics, psychology, Sanskrit, sociology, statistics, Urdu and modern Indian languages (Bengali, Tamil, Telugu, Malayalam, Marathi, Punjabi and Kashmiri).

MTech programme: Candidates interested in applying for this programme should have pursued BTech or its equivalent examination, in the relevant branch of study, with not less than 60 per cent marks in aggregate or its equivalent CPI/CGPA/NAG.

MSc programme: Candidates interested in applying for this programme should have pursued BSc with biochemistry/biosciences/life sciences/medical biochemistry/clinical biochemistry as the main subject, with two of the following subsidiary subjects: zoology/botany/chemistry/biotechnology; or BSc with biochemistry/biosciences/clinical biochemistry/medical biochemistry as one of the subjects of equal value, along with any two of the optional subjects, i.e. zoology/botany/chemistry/biotechnology.


The candidates will be selected on the basis of departmental test conducted by the university.

The candidates are required to apply at the official website.

The last date of submission of the online application form for the MSc (agriculture)/LLM programmes is April 10.

The last date of submission of the online application form for the MBBS/BDS programmes is June 15.

The last date of submission of the online application form for the MA/MTech/MCom programmes is April 17.

The last date of submission of the online application form for the LLM/BRTT/MSc programmes is April 18.

The last date of submission of the online application form for the MA/BFA programmes is April 12.

The last date of submission of the online application form for the BA (Hons)/MPEd programmes is April 19.



Transcrypt: Anatomy of a Python to JavaScript Compiler – InfoQ.com


Featuring a diversity of programming languages, backend technology offers the right tool for any kind of job. At the frontend, however, it's one size fits all: JavaScript. Someone with only a hammer has to treat everything like a nail. One attempt to break open this restricted world is represented by the growing set of source-to-source compilers that target JavaScript. Such compilers are available for languages as diverse as Scala, C++, Ruby, and Python. The Transcrypt Python to JavaScript compiler is a relatively new open source project, aiming at executing Python 3.6 at JavaScript speed, with comparable file sizes.

For a tool like this to offer an attractive alternative to everyday web development in JavaScript, at least three demands have to be met; they are discussed below as Demand 1 through Demand 3.

To be successful, all aspects of these three requirements have to be met. Different compilers strike a different balance between them, but no viable compiler for everyday production use can neglect any of them. For Transcrypt, each of the three demands has led to certain design decisions.

Demand 1:

The look and feel of web sites and web applications are directly connected to the underlying JavaScript libraries used, so to have exactly the same look and feel, a site or application should use exactly the same libraries.

Although fast connections may hide the differences, achieving the same page load time, even on mobile devices running on public networks, mandates having roughly the same code size. This rules out downloading a compiler, virtual machine or large runtime at each new page load.

Achieving the same startup time as pages utilizing native JavaScript is only possible if the code is statically precompiled to JavaScript on the server. The larger the amount of code needed for a certain page, the more obvious the difference becomes.

To have the same sustained speed, the generated JavaScript must be efficient. Since JavaScript virtual machines are highly optimized for common coding patterns, the generated JavaScript should be similar to handwritten JavaScript, rather than emulating a stack machine or any other low level abstraction.

Demand 2:

To allow seamless access to any JavaScript library, Python and JavaScript have to use unified data formats, a unified calling model, and a unified object model. The latter requires the JavaScript prototype-based single inheritance mechanism to somehow gel with Python's class-based multiple inheritance. Note that the recent addition of the keyword 'class' to JavaScript has no impact on the need to bridge this fundamental difference.

To enable efficient debugging, things like setting breakpoints and single-stepping through code have to be done at the source level. In other words: source maps are necessary. Whenever a problem is encountered, it must be possible to inspect and comprehend the generated JavaScript to pinpoint exactly what's going on. To this end, the generated JavaScript should be isomorphic to the Python source code.

The ability to capitalize on existing skills means that the source code has to be pure Python, not some syntactic variation. A robust way to achieve this is to use Python's native parser. The same holds for semantics, a requirement that poses practical problems and requires the introduction of compiler directives to maintain runtime efficiency.

Demand 3:

Continuity is needed to protect investments in client-side Python code, requiring the continued availability of client-side Python compilers with both good conformance and good performance. Striking the right balance between these two is the most critical part of designing a compiler.

The continued availability of trained Python developers is sufficiently warranted by the fact that Python has been the number one language taught in introductory computer science courses for three consecutive years now. On the backend it is used for every conceivable branch of computing. All these developers, used to designing large, long-lived systems rather than insulated, short-lived pieces of frontend script code, become available to browser programming if it is done in Python.

With regard to productivity, many developers who have made the switch from a different programming language to Python agree that it has significantly increased their output while retaining runtime performance. The latter is due to the fact that libraries used by Python applications for time-critical operations like numerical processing and 3D graphics usually compile to native machine code.

The last point, openness to changed needs, means that modularity and flexibility have to be supported at every level. The presence, right from the start, of class-based OO with multiple inheritance and a sophisticated module and package mechanism has contributed to this. In addition, the possibility to use named and default parameters allows developers to change call signatures at a late stage without breaking existing code.

Conformance versus performance: language convergence to the rescue

Many Python constructs closely match JavaScript constructs, especially when translating to newer versions of JavaScript. There's a clear convergence between both languages. Specifically, more and more elements of Python make their way into JavaScript: for ... of ..., classes (in a limited form), modules, destructuring assignment and argument spreading. Since constructs like for ... of ... are highly optimized on modern JavaScript virtual machines, it's advantageous to translate such Python constructs to closely matching JavaScript constructs. Such isomorphic translation will result in code that can benefit from optimizations in the target language. It will also result in JavaScript code that is easy to read and debug.

Although with Transcrypt, through the presence of source maps, most debugging will take place stepping through Python rather than JavaScript code, a tool should not conceal but rather reveal the underlying technology, granting the developer full access to 'what's actually going on'. This is even more desirable since native JavaScript code can be inserted at any point in the Python source, using a compiler directive.

The isomorphism between Python and the JavaScript code generated by Transcrypt is illustrated by the following fragment using multiple inheritance.
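A minimal stand-in fragment (the class names and method bodies are illustrative, not taken from the original article):

    class A:
        def __init__(self, x):
            self.x = x

        def show(self, label):
            print('A.show', label, self.x)

    class B:
        def __init__(self, y):
            self.y = y

        def show(self, label):
            print('B.show', label, self.y)

    class C(A, B):  # multiple inheritance: C combines A and B
        def __init__(self, x, y):
            A.__init__(self, x)
            B.__init__(self, y)

        def show(self, label):
            A.show(self, label)
            B.show(self, label)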

translates to:
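The sketch below is a hand-written simplification, assuming Transcrypt's __class__ runtime helper; actual compiler output differs in detail, but has the same one-to-one, class-for-class and method-for-method structure:

    // Schematic approximation of Transcrypt-style output, not real compiler output.
    // __class__ is the runtime helper that builds a class object from a name,
    // a list of base classes and a set of attributes, computing the method
    // resolution order needed for multiple inheritance.
    var C = __class__ ('C', [A, B], {
        __init__: function (self, x, y) {
            A.__init__ (self, x);
            B.__init__ (self, y);
        },
        show: function (self, label) {
            A.show (self, label);
            B.show (self, label);
        }
    });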

Striving for isomorphic translation has limitations, rooted in subtle, but sometimes hard to overcome, differences between the two languages. Whereas Python allows lists to be concatenated with the + operator, isomorphic use of this operator in JavaScript results in both lists being converted to strings and then glued together. Of course a + b could be translated to __add__ (a, b), but since the types of a and b are determined at runtime, this would result in a function call and dynamic type inspection code being generated for something as simple as 1 + 1, resulting in bad performance for computations in inner loops. Another example is Python's interpretation of 'truthyness'. The boolean value of an empty list is True (or rather: true) in JavaScript and False in Python. Dealing with this globally in an application would require every if-statement to feature a conversion, since in the Python construct if a: it cannot be predicted whether a holds a boolean or something else, like a list. So if a: would have to be translated to if (__istrue__ (a)), again resulting in slow performance if used in inner loops.

In Transcrypt, compiler directives embedded in the code (pragmas) are used to control compilation of such constructs locally. This enables writing matrix computations using standard mathematics notation like M4 = (M1 + M2) * M3, while at the same time not generating any overhead for something like perimeter = 2 * pi * radius. Syntactically, pragmas are just calls to the __pragma__ function, executed at compile time rather than at run time. Importing a stub module containing def __pragma__ (directive, parameters): pass allows this code to run on CPython as well, without modification. Alternatively, pragmas can be placed in comments.
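As a concrete illustration, here is a hedged sketch of that pattern, assuming Transcrypt's 'opov' (operator overloading) directive and a toy matrix class; only the two expressions M4 = (M1 + M2) * M3 and perimeter = 2 * pi * radius come from the article itself:

    from math import pi

    def __pragma__(directive, *parameters):
        # Stub, as described above: makes pragmas no-ops under CPython.
        pass

    class Matrix:
        # Toy stand-in with overloaded elementwise operators, for illustration only.
        def __init__(self, cells):
            self.cells = cells
        def __add__(self, other):
            return Matrix([a + b for a, b in zip(self.cells, other.cells)])
        def __mul__(self, other):
            return Matrix([a * b for a, b in zip(self.cells, other.cells)])

    M1, M2, M3 = Matrix([1, 2]), Matrix([3, 4]), Matrix([5, 6])
    radius = 10.0

    __pragma__('opov')    # from here on, + and * compile to __add__/__mul__ calls
    M4 = (M1 + M2) * M3
    __pragma__('noopov')  # back to fast native operators

    perimeter = 2 * pi * radius  # no overloading overhead generated here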

Unifying the type system while avoiding name clashes

Another fundamental design choice for Transcrypt was to unify the Python and JavaScript type systems, rather than have them live next to each other, converting between them on the fly. Data conversion costs time and increases target code size as well as memory use. It burdens the garbage collector and makes interaction between Python code and JavaScript libraries cumbersome.

So the decision was made to embrace the JavaScript world, rather than to create a parallel universe. A simple example of this is the following code using the Plotly.js library:
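A hedged sketch of what such code can look like (the div id and data are made up; Plotly.newPlot is a real Plotly.js entry point, available as a global when plotly.js is loaded in the page):

    # Browser-side Transcrypt code; Plotly is the global created by plotly.js,
    # so this block is not runnable under plain CPython.
    xValues = [x / 10.0 for x in range(100)]    # list comprehension, a facility JavaScript lacks
    trace = {
        'x': xValues,
        'y': [x * x for x in xValues],
        'mode': 'lines'
    }
    layout = {'title': 'A parabola, plotted from Python'}

    Plotly.newPlot('plotDiv', [trace], layout)  # the dicts above already ARE JavaScript objects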

Apart from the pragma that allows leaving out the quotes from dictionary keys, which is optional and only used for convenience, the code looks a lot like comparable JavaScript code. Note the (optional) use of list comprehensions, a facility JavaScript still lacks. The fact that Python dictionary literals are mapped to JavaScript object literals is of no concern to the developer; they can use the Plotly JavaScript documentation while writing Python code. No conversion is done behind the scenes. A Transcrypt dict IS a JavaScript object, in all cases.

In unifying the type systems, name clashes occur. Python and JavaScript strings both have a split (), but their semantics have important differences. There are many cases of such clashes and, since both Python and JavaScript are evolving, future clashes are to be expected.

To deal with these, Transcrypt supports the notion of aliases. Whenever .split is used in Python, it is translated to .py_split, a JavaScript function having Python split semantics. In native JavaScript code, split will refer to the native JavaScript split function, as it should. However, the JavaScript native split method can also be called from Python, where it is named js_split. While predefined aliases like these ship with Transcrypt, the developer can define new aliases and undefine existing ones. In this way, any name clashes resulting from the unified type system can be resolved without runtime penalty, since aliases do their work at compile time.
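For example (a sketch built on the predefined aliases just described; js_split exists only in compiled Transcrypt code, not under CPython):

    s = 'one  two'               # note the double space
    parts = s.split()            # compiled as s.py_split(): Python semantics, ['one', 'two']
    rawParts = s.js_split(' ')   # native JavaScript split: ['one', '', 'two']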

Aliases also allow generation of any JavaScript identifier from a Python identifier. An example is the $ character, which is allowed as part of a name in JavaScript but forbidden in Python. Transcrypt strictly conforms to Python syntax and is parsed by the native CPython parser, making its syntax identical. A piece of code using jQuery may look as follows:
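A hedged sketch of the pattern, assuming the alias pragma form __pragma__ ('alias', pyName, jsName) to map the Python-legal name S onto jQuery's $ (and the __pragma__ stub from earlier for CPython):

    __pragma__('alias', 'S', '$')   # every S in this module becomes $ in the output

    def start():
        S('#greeting').text('Hello world from Python')   # compiled as $('#greeting').text(...)

    S(document).ready(start)        # document is the browser global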

Since Transcrypt uses compilation rather than interpretation, imports have to be decided upon at compile time, to allow joint minification and shipment of all modules involved. To this end, C-style conditional compilation is supported, along the lines of the following code fragment:
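A hedged sketch of the idea; the 'ifdef'/'else'/'endif' directive names follow the Transcrypt documentation, and the symbol (here 'fullDiagnostics') and the diagnostics module are hypothetical, with symbols assumed to be set by a command-line switch at compile time:

    __pragma__('ifdef', 'fullDiagnostics')
    from diagnostics import fullTrace   # only compiled and shipped if the symbol is defined
    __pragma__('else')
    def fullTrace(*args):               # cheap stand-in otherwise
        pass
    __pragma__('endif')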

The same mechanism is used in the Transcrypt runtime to switch between JavaScript 5 and JavaScript 6 code, with the target version selected at compile time.

In this way, optimizations in newer JavaScript versions are taken into account while retaining backward compatibility. In some cases, the possibility for optimization is preferred over isomorphism.

Some optimizations are optional, such as the possibility to activate call caching, which results in repeated calls to inherited methods being done directly, rather than through the prototype chain.

Static versus dynamic typing: Scripting languages growing mature

There has been a resurgence in appreciation of the benefits of static typing, with TypeScript being the best known example. In Python, as opposed to JavaScript, static typing syntax is an integral part of the language, supported by the native parser. Type checking itself, however, is left to third-party tools, most notably mypy, a project from Jukka Lehtosalo with regular contributions from Python initiator Guido van Rossum. To enable efficient use of mypy in Transcrypt, the Transcrypt team contributed a lightweight API to the project that makes it possible to activate mypy from another Python application without going through the operating system. Although mypy is still under development, it already catches an impressive amount of typing errors at compile time. Static type checking is optional and can be activated locally by inserting standard type annotations. A trivial example of the use of such annotations is the mypy in-process API itself:
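A hedged sketch of such an annotated signature, wrapping mypy's real in-process entry point mypy.api.run; the wrapper itself and the file name are illustrative:

    from typing import List, Tuple
    from mypy import api

    def run(params: List[str]) -> Tuple[str, str, int]:
        # Returns (normal report, error report, exit status), as mypy.api.run does.
        return api.run(params)

    report, errors, status = run(['my_module.py'])
    if status != 0:
        print(report, errors)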

As illustrated by the example, static typing can be applied where appropriate, in this case in the signature of the run function, since that is the part of the API module that can be seen from the outside by other developers. If anyone misinterprets the parameter types or the return type of the API, mypy will generate a clear error message, referring to the file and line number where the mismatch occurs.

The concept of dynamic typing remains central to languages like Python and JavaScript, because it allows for flexible data structures and helps to reduce the amount of source code needed to perform a certain task. Source code size is important, because understanding and maintaining source code starts with reading through it. In that sense, 100 kB of Python source code offers a direct advantage over 300 kB of C++ source that has the same functionality, but with hard-to-read type definitions using templates, explicit type inspection and conversion code, overloaded constructors and other overloaded methods, abstract base classes to deal with polymorphic data structures, and type-dependent branching.

For small scripts well below 100 kB of source code and written by one person, dynamic typing seems to have only advantages. Very little planning and design are needed; everything just falls into place while programming. But when applications grow larger and are no longer built by individuals but by teams, the balance changes. For such applications, featuring more than roughly 200 kB of source code, the lack of compile-time type checking has real consequences.

An interface featuring even one parameter that may refer to a complex, dynamically typed object structure cannot be considered sufficiently stable to warrant separation of concerns. While this type of 'who did what, why and when' programming accounts for tremendous flexibility, it also accounts for design decisions being postponed to the very last moment, impacting large amounts of already written code and requiring extensive modifications.

The 'coupling and cohesion' paradigm applies. It's OK for modules to have strong coupling of design decisions on the inside. But between modules there should preferably be loose coupling; a design decision to change the inner workings of one module should not influence the others. In general, this leads to a simple rule of thumb for the choice between dynamic and static typing: prefer statically typed interfaces between modules, and use dynamic typing freely inside them.

So while the current surge in static typing may seem like a regression, it isn't. Dynamic typing has earned its place and it won't go away. The opposite is also true: even a traditionally statically typed language like C# has absorbed dynamic typing concepts. But with the complexity of applications written in languages like JavaScript and Python growing, effective modularization, cooperation and unit validation strategies gain importance. Scripting languages are coming of age.

Why choose Python over JavaScript on the client?

Due to the immense popularity of programming for the web, JavaScript has drawn lots of attention and investment. There are clear advantages in having the same language on the client and on the server. An important advantage is that it becomes possible to move code from server to client at a late stage, when an application is upscaled.

Another advantage is unity of concept, allowing developers to work both on the front end and the back end without constantly switching between technologies. The desirability of decreasing the conceptual distance between the client and server parts of an application has resulted in the popularity of a platform like Node.js. But at the same time, it carries the risk of expanding the 'one size fits all' reality of current web client programming to the server. JavaScript is considered a good enough language by many. Recent versions finally start to support features like class-based OO (be it in the form of a thin varnish over its prototyping guts), modules and namespaces. With the advent of TypeScript, the use of strict typing is possible, though incorporating it in the language standard is probably some years away.

But even with these features, JavaScript isn't going to be the one language to end all languages. A camel may resemble a horse designed by a committee, but it never becomes one. What the browser language market needs, in fact what any free market needs, is diversity. It means that the right tool can be picked for the job at hand: hammers for nails, and screwdrivers for screws. Python was designed with clean, concise readability in mind right from the start. The value of that shouldn't be underestimated.

JavaScript will probably be the choice of the masses in programming the client for a long time to come. But for those who consider the alternative, what matters to continuity is the momentum behind a language, as opposed to an implementation of that language. So the most important choice is not which implementation to use, but which language to choose. In that light, Python is an effective and safe choice. Python has a huge mindshare, and there's a growing number of browser implementations for it, approaching the gold standard of CPython ever more closely while retaining performance.

While new implementations may supersede existing ones, this process is guided by a centrally guarded consensus over what the Python language should entail. Switching to another implementation will always be easier than switching to the next JavaScript library hype or preprocessor with proprietary syntax to deal with its shortcomings. Looking at the situation in the well-established server world, it is to be expected that multiple client-side Python implementations will continue to exist side by side in healthy competition. The winner here is the language itself: Python in the browser is there to stay.

Jacques de Hooge MSc is a C++/Python developer living in Rotterdam, the Netherlands. After graduating from the Delft University of Technology, department of Information Theory, he started his own company, GEATEC engineering, specializing in realtime controls, scientific computation, oil and gas prospecting and medical imaging. He is a part-time teacher at the Rotterdam University of Applied Sciences, where he teaches C++, Python, image processing, artificial intelligence, robotics, realtime embedded systems and linear algebra. Currently he's developing cardiological research software for the Erasmus University in Rotterdam. He is also the initiator and lead designer of the Transcrypt open source project.

Visit link:
Transcrypt: Anatomy of a Python to JavaScript Compiler - InfoQ.com

Neuroscience has identified why some works of art become universal phenomenons – Quartz

What music we love is usually a matter of personal taste. But there are some works of art that seem to transcend differences in personal aesthetics and rise to universal acclaim. Over the past 18 months, Lin-Manuel Miranda's hip-hop musical Hamilton has emerged as one such cultural phenomenon.

What is it about Hamilton that resonates so broadly? Psychology and neuroscience suggest that the magic formula for universal acclaim often comes down to a simple equation: take something familiar, and combine it with something that feels entirely new.

With or without formal musical training, most of us gain an informal education in the musical patterns typical of our culture as we grow up. Lullabies, pop songs on the radio, symphonies sampled on Looney Tunes, and middle-school jazz band practice train our ears to recognize common themes. In this way, we develop expectations about musical rhythms and the way they make use of consonance (pleasant, calming sounds) and dissonance (sounds that lead to tension or irritation).

This familiarity also means that we're able to anticipate musical changes. When we hear a new song, we can usually predict the introduction of the chorus after an eight-bar verse or a high note at the end of the bridge. Elements of musical surprise, such as novel lyrics, clever harmonic changes, or an unanticipated breakdown, therefore literally excite our brain. According to a 1999 paper by neuroscientist Anne Blood and colleagues, these types of musical surprises elicit heightened activity in the brain's auditory and frontal regions, where tonality is tracked and interpreted. Pleasant music has been shown to cause increased activation in the medial rostral prefrontal cortex, a region used to self-monitor our emotional and mental states, or those of others. This suggests that when music gives us a pleasant surprise, it helps promote the feeling that all is right with the world.

But just because we find a piece of music novel doesn't necessarily mean that we'll enjoy it. We prefer stimuli that strike the perfect balance between simplicity and familiarity on one hand, and complexity and novelty on the other. This is because humans and other animals have evolved to feel an arousing mixture of fear and curiosity in the presence of new things.

When an encounter is sufficiently rewarding, we experience what neurobiologists Kent Berridge and Morten Kringelbach call "core liking": a physiologically pleasant feeling that influences our future judgments and actions, motivating us to revisit the experience. Core liking depends on our appetites for cognitive effort as well as the amount of stimulation we find pleasant. For example, psychologist Philip A. Russell pointed out that once exposure to a popular song hits our personal saturation point, we limit how often we hear it. Novel items require more cognitive effort, but we're willing to make the effort in those categories that interest us.

Hamilton's monumental success, therefore, can be attributed to its unique combination of the familiar and the novel. Its musical foundation is hip-hop, a genre that has dominated popular music for a few decades now. But it's quite unusual to see hip-hop applied to material straight out of history books. And so when we hear the Marquis de Lafayette beat-boxing, or listen to the story of the battle of Yorktown overlaid against a chorus inspired by Mary J. Blige, the recognizable elements trigger a release of dopamine in the basal ganglia's caudate nucleus (a part of our brain that helps control attachments). Meanwhile, the novelty of the music engages the nucleus accumbens, the reward-seeking part of our brain. In other words, the juxtaposition of musical novelty and familiarity is more likely to engage the brain's reward system, according to findings from neuroscientist Valorie Salimpoor and her colleagues.

Psychology can also help us understand the appeal of Hamilton's exuberant energy, as communicated by the musical's modern groove and urgent rap vocals. These features push us to listen to its lyrics much in the same way we did when we were teenagers. Research suggests that this phenomenon is especially powerful in adolescence. Social psychologists Morris Holbrook and Robert Schindler note that imprinting, the process by which young animals form strong and irreversible attachments to caregivers, is strongest during the critical period of our youth. Music's faculty for mediating feelings may cause teens to imprint on songs that helped them through uncertain times.

As adolescents, we bond to music at an age when our curiosity about the world is immense and our experience is small. Lyrics help us solve problems, soothe heartaches, and match our powerfully oscillating emotions. Because Hamilton's musical styling makes us feel like teens again, we listen to it with the same sense of urgency as we did when we were young. Moreover, Hamilton's young characters, captured at a moment when their personalities and achievements were still being formed, may also remind us of ourselves at our most earnest, energetic, and least self-assured period, when we are most open to fresh influences and thoughts.

As we get older, and we form a personal prototype of what constitutes good music, we become harder to impress. Much of our personal filter has to do with sociocultural status. Sociologists such as Pierre Bourdieu describe a "taste culture" whereby professionals and working-class music lovers are drawn to genres that match their self-image. These genres roughly correspond to our ideas about what constitutes high art and low art. But Hamilton takes a historically populist art form (hip-hop) and presents it on the traditionally highbrow Broadway stage, thereby eschewing easy categorization and broadening its appeal. The added bonus is that the musical's plot immerses us in an extraordinarily large and important context: the birth of the United States of America.

In all of these ways, Hamilton is precisely calibrated to push us to listen to its soundtrack with the passion that we bring to music as young listeners, and the intellectual curiosity of adults. By linking incredibly sophisticated yet familiar musical themes to stories that are novel to all but history buffs, Hamilton reminds us that our nation's founders were once beginners, and helps all of us remember what it's like to be "young, scrappy, and hungry."

Continue reading here:
Neuroscience has identified why some works of art become universal phenomenons - Quartz