How the monkeyflower gets its spots – UC Berkeley

The yellow monkeyflower's distinctive red spots serve as landing pads for bees and other pollinators, helping them access the sweet nectar inside. A new study reveals the genetic programming that creates these attractive patterns. (Image by PollyDot via Pixabay)

The intricate spotted patterns dappling the bright blooms of the monkeyflower plant may be a delight to humans, but they also serve a key function for the plant. These patterns act as bee landing pads, attracting nearby pollinators to the flower and signaling the best approach to access the sweet nectar inside.

"They are like runway landing lights, helping the bees orient so they come in right side up instead of upside down," said Benjamin Blackman, assistant professor of plant and molecular biology at the University of California, Berkeley.

See the companion press release at UConn Today

In a new paper, Blackman and his group at UC Berkeley, in collaboration with Yaowu Yuan and his group at the University of Connecticut, reveal for the first time the genetic programming that helps the monkeyflower and likely other patterned flowers achieve their spotted glory. The study was published online today (Thursday, Feb. 20) in the journal Current Biology.

"While we know a good deal about how hue is specified in flower petals (whether it is red or orange or blue, for instance), we don't know a lot about how those pigments are then painted into patterns on petals during development to give rise to these spots and stripes that are often critical for interacting with pollinators," Blackman said. "Our lab, in collaboration with others, has developed the genetic tools to be able to identify the genes related to these patterns and perturb them so that we can confirm what's actually going on."

In the study, the research team used CRISPR-Cas9 gene editing to recreate the yellow monkeyflower patterns found in nature. On the left, a wild-type monkeyflower exhibits the typical spotted pattern. In the middle, a heterozygote with one normal RTO gene and one damaged RTO gene exhibits blotchier spots. And on the right, a homozygote with two copies of the damaged RTO gene is all red, with no spots. (UC Berkeley photo by Srinidhi Holalu)

The positions of petal spots aren't mapped out ahead of time, like submarines in a game of Battleship, Blackman said. Instead, scientists have long theorized that they could come about through the workings of an activator-repressor system, following what is known as a reaction-diffusion model, in which an activator molecule stimulates a cell to produce the red-colored pigment that produces a spot. At the same time, a repressor molecule is expressed and sent to neighboring cells to instruct them not to produce the red pigment.

The results are small, dispersed bunches of red cells surrounded by cells that keep the background yellow color.

"By tweaking the parameters (how strongly a cell turns on an inhibitor, how strongly the inhibitor can inhibit the activator, how quickly it moves between cells), it can lead to big spots, small spots, striped patterns, really interesting periodic patterns," Blackman said.
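To make the activator-inhibitor idea concrete, here is a minimal, purely illustrative simulation of a one-dimensional reaction-diffusion system of the Gierer-Meinhardt type. It is not the model used in the study; the parameter values, the loose mapping of the activator to a NEGAN-like gene and the inhibitor to an RTO-like gene, and the pigmentation threshold are all assumptions chosen only to show how a fast-diffusing inhibitor paired with a slow-diffusing, self-amplifying activator can break a uniform row of cells into periodic spots.

```python
import numpy as np

# Illustrative activator-inhibitor (Gierer-Meinhardt-type) simulation on a
# 1-D row of "petal cells". All parameter values are invented for illustration.
n_cells, steps, dt = 200, 40000, 0.005
Da, Dh = 0.5, 25.0             # inhibitor diffuses much faster than activator
rho, mu_a, mu_h = 1.0, 1.0, 1.2

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(n_cells)   # activator (NEGAN-like, assumed)
h = 1.0 + 0.01 * rng.standard_normal(n_cells)   # inhibitor (RTO-like, assumed)

def laplacian(x):
    # Discrete Laplacian with periodic boundaries: diffusion to/from neighbors.
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    # Activator makes more of itself (a**2) but is suppressed by the inhibitor;
    # the inhibitor is produced wherever the activator is active.
    da = rho * a * a / h - mu_a * a + Da * laplacian(a)
    dh = rho * a * a - mu_h * h + Dh * laplacian(h)
    a += dt * da
    h += dt * dh

# Cells where the activator ends up well above average would be "spotted".
spots = a > a.mean() + a.std()
print(f"{spots.sum()} of {n_cells} cells end up pigmented")
```

Tuning the knobs changes the pattern much as Blackman describes: raising Dh relative to Da spaces the pigmented regions farther apart, while shrinking the inhibitor's reach lets them crowd closer together.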

In the study, UC Berkeley postdoctoral researcher Srinidhi Holalu and research associate Erin Patterson identified two natural varieties of the yellow monkeyflower: one type with the typical red spots in the throat of the flower and a second type with an all-red throat, appearing in multiple natural populations in California and Oregon, including at the UC Davis McLaughlin Reserve. In parallel, UConn postdoctoral researcher Baoqing Ding found a very similar plant with fully red-throated flowers when surveying a population of Lewis's monkeyflower that had induced DNA mutations.

When the scientists presented bees in the lab with the two types of monkeyflowers, the bees preferred the red-tongued variety to the spotted variety, though the red-tongued variety is less common in nature. (UC Berkeley video by Erin Patterson and Anna Greenlee)

In a previous study, the Yuan lab had found that a gene called NEGAN (nectar guide anthocyanin) acts as an activator in the monkeyflower petals, signaling the cells to produce the red pigment. Through detailed genomic analysis in both monkeyflower species, the two groups were able to pinpoint that a gene called RTO, short for red tongue, acts as the inhibitor.

The red-throated forms of the monkeyflower have defective RTO inhibitor genes, resulting in a characteristic all-red throat, rather than red spots. To confirm their findings, Holalu used the CRISPR-Cas9 gene editing system to knock out the RTO gene in spotted variants of the flower. The result was flowers with a flashy red throat. Further experiments revealed how the functional form of the RTO protein moves to neighboring cells and represses NEGAN to prevent the spread of pigmentation beyond the local spots. This study is the first reported use of CRISPR-Cas9 editing to research the biology of monkeyflowers.

The team also collaborated with Michael Blinov at the UConn School of Medicine to develop a mathematical model to explain how different self-organized patterns might arise from this genetic system.

"This work is the simplest demonstration of the reaction-diffusion theory of how patterns arise in biological systems," said Yaowu Yuan, associate professor of ecology and evolutionary biology at UConn. "We are closer to understanding how these patterns arise throughout nature."

Monkeyflower plants with the RTO gene knocked out by CRISPR-Cas9 gene editing produce one big patch where all flowers exhibit a fully red throat, in contrast to wild fields where red-tongued flowers appear in small dispersed spots. (UC Berkeley photo by Srinidhi Holalu)

The Mind’s Reality Is Consistent with Neuroscience – Walter Bradley Center for Natural and Artificial Intelligence

In a recent podcast, neurosurgeon Michael Egnor talked with Robert J. Marks about the mind and its relationship to the brain and about different theories as to how the mind works. They talked about eliminative theories (the mind doesn't really exist) and emergent theories (the mind arises from matter) earlier, and then the conversation turned to dualism:

Here's a partial transcript:

17:49 | Dualist theories of the mind

Robert J. Marks: Well, there is materialism and panpsychism. What other theories of the mind are on tap?

Michael Egnor: Well, there are a number of dualism theories of the mind. And dualism, generally considered, is the viewpoint that mental states are not the same thing as material states, as brain states. That is, what you consider material aspects of a human being, there is a remainder that is mental, that is not material. But there are a variety of ways of looking at dualism.

18:29 | Cartesian dualism

The classical dualism way of looking at things, at least in modern philosophy, is Cartesian dualism, which was proposed by Descartes back in the 17th century. He proposed that human beings are composites of matter extended in space and spirit, which he thought of as a thinking substance. So he thought that there were two separate substances that were joined to form a human being, basically the material body joined to the immaterial spirit.

René Descartes (1596–1650) was a creative mathematician of the first order, an important scientific thinker, and an original metaphysician.

It is sometimes said that Descartes's dualism placed the mind outside nature by rendering it as an immaterial substance. That is a retrospective judgment from a perspective in which immaterial substances are automatically deemed unnatural. For Descartes and his followers, mind-body interaction and its laws were included within the domain of natural philosophy or physics (in the general meaning of the latter term, as the theory of nature). Descartes spoke of regular relations between brain states and the resulting sensory experiences, which his followers, such as Régis, subsequently deemed laws of mind-body relation (see Hatfield 2000). In this way, Descartes and his followers posited the existence of psychophysical or psychophysiological laws, long before Gustav Fechner (1801–87) formulated a science of psychophysics in the nineteenth century.

There are certainly good things to say about the Cartesian understanding of the mind and body. But I think it's fundamentally misguided from a philosophical and logical standpoint and that it has actually done quite a bit of harm philosophically, because it was described in the 20th century by a philosopher named Gilbert Ryle as "the ghost in the machine." And that is that Descartes understood human beings to be basically biological machines that were inhabited by a ghost, which was the spirit or the mind. And materialists have simply said, well, there's no ghost. So we'll just understand human beings as biological machines. That's a profound error, but Descartes opened the door to that.

Note: Gilbert Ryle (1900–1976) used the phrase "ghost in the machine" in an influential 1949 book, The Concept of Mind. His own behaviorist theories are no longer much regarded, though they were influential in encouraging other materialist approaches, for example:

With his remarkable ability to turn a phrase, what Ryle even more famously did was to stigmatize mind as the "Ghost in the Machine." Unfortunately, the phrase greatly advanced the Enlightenment idea of Man a Machine. And it helped prepare the way for today's revolution in cognitive science based on the computational theory of mind, with the digital computer the model for intellectual operations.

20:00 | Hylomorphism

Michael Egnor: The perspective that Descartes cast aside was that of hylomorphism. That's the view that all of nature consists of a composite of form and matter. Hyle is the Greek word for matter, and morphe means form. Everything in nature is a composite of form, which Aristotle would call a principle of intelligibility, and matter, which is a principle of individuation. It's a rather profound metaphysical perspective, and in that perspective, the soul or the mind is the form of the body. But it's a different perspective from Descartes's perspective, and it doesn't see mind and body as being separate substances. It sees a human being as being a unitary thing, with different principles involved but not different substances.

Note: For more thoughts on hylomorphism (hylemorphism) see Michael Egnor, How can mind interact with matter? (Mind Matters News)

21:17 | Comparing theories of the mind

Robert J. Marks: One of the criteria that you mentioned for establishing a good model of the mind-brain problem is consistency with the results of neuroscience. How do these three different theories stack up: materialism, panpsychism, and dualism?

Michael Egnor: Well, panpsychism... I can see why some very intelligent people like Dr. Chalmers have made that inference [that everything is, in some sense, conscious], but I don't think panpsychism is a particularly scientific viewpoint. Realistically, there is no particular reason to think that electrons or grains of sand have minds.

See also: Are electrons conscious? A classical philosopher can explain why the belief that everything is conscious is wrong (Michael Egnor)

Robert J. Marks: I'm sitting here thinking, how could you ever test something like that?

Michael Egnor: Well, you could ask an electron, and people have tried, but the electrons don't answer.

Materialists have, of course, made the claim that neuroscience completely supports materialism. I had an internet debate with Dr. Steven Novella, who is a neurologist at Yale, a number of years ago, and he's a materialist. And Dr. Novella said that every single bit of evidence in the history of neuroscience supports materialism. Which I think is not the case.

The problem with that is that neuroscientists generally work from a materialist perspective, and they ask questions of the mind and the brain from a materialist perspective. And, goodness gracious, it's no surprise that if that's the way you ask the questions, then materialism always seems like it's the answer.

I think dualism is a much, much better explanation for many aspects of neuroscience.

Robert J. Marks: That was my next question: Do you, speaking as an experienced neurosurgeon who has played around with the brains of many, many people, what do you believe? Do you believe that the mind is distinct from the brain, as a dualist does?

Michael Egnor: I think that, first of all, if you want to understand the mind and the brain, you need to start with a solid metaphysical foundation. And I think hylomorphism is a solid metaphysical foundation. I don't think Cartesian dualism is a good metaphysical foundation, and I certainly don't think materialism is a good metaphysical foundation.

I think the best explanation of the relationship of the mind to the brain is Aristotelian hylomorphism, which is the viewpoint that the soul is the form of the body and that certain powers of the soul, particularly the intellect and will, are not generated by matter but are immaterial things, what Thomas Aquinas would call the spirit. But other properties of the mind, like perception and memory and imagination, are physical. They are directly related to brain matter and they are generated by brain matter. I think that's the best explanation philosophically for what we find in neuroscience.

Here's a brief introduction to hylomorphism:

Form and matter considered on their own are merely concepts in the mind; in things they are two distinct principles that make the one unified individual thing. The substantial form makes a thing what it is and the accidental forms (e.g. quantity and quality) modify it to have the types of quantity and qualities it has. So a substantial form makes a cat a cat, but an accidental form makes it a black cat.

What differentiates Seabiscuit from Secretariat is not horse-ness, since they are both horses; matter makes Seabiscuit this particular horse and Secretariat that particular horse.

Show Notes

00:37 | Introducing Dr. Michael Egnor, Professor of Neurosurgery and Pediatrics at State University of New York, Stony Brook
01:32 | We can use our minds to understand our minds
01:55 | What defines a good theory of the mind?
02:26 | The mind vs. the soul
03:51 | The self-refuting theory of eliminative materialism
07:12 | A reasonably good explanation that fits the facts
08:09 | What theories of the mind make sense?
08:32 | A materialist perspective of the mind
10:04 | The idea of emergence
11:26 | The wetness of water
13:27 | Qualia: the way things feel
14:17 | Two problems of explaining consciousness
15:40 | Panpsychism
17:49 | Dualist theories of the mind
18:29 | Cartesian dualism
20:00 | Hylomorphism
21:17 | Comparing theories of the mind
25:32 | The emerging field of neuroscience and its effect on theories of the mind

See also the earlier parts of the discussion: Why eliminative materialism cannot be a good theory of the mind. Thinking that the mind is simply the brain, no more and no less, involves a hopeless contradiction. How can you have a proposition that the mind doesn't exist? That means propositions don't exist and that means, in turn, that you don't have a proposition.

and

Why the mind cannot just emerge from the brain. The mind cannot emerge from the brain if the two have no qualities in common. In his continuing discussion with Robert J. Marks, Michael Egnor argues that emergence of the mind from the brain is not possible because no properties of the mind have any overlap with the properties of brain. Thought and matter are not similar in any way. Matter has extension in space and mass; thoughts have no extension in space and no mass.

Ted W. Simon is being recognized by Continental Who’s Who – Yahoo Finance

WINSTON, Ga., Feb. 21, 2020 /PRNewswire/ -- Ted W. Simon is being recognized by Continental Who's Who as a Top Expert in the field of Education and Science as a Principal at Ted Simon, LLC.

An award-winning toxicologist and scientist, Dr. Simon has had a remarkable career on account of his expertise and dedication to toxicology and science. He served as the senior toxicologist in the waste management division of the Atlanta regional office of the Environmental Protection Agency for over ten years. Since 2006, Dr. Simon has worked in scientific consulting as the principal at Ted Simon, LLC. He is knowledgeable about risk assessment, mathematical modeling, statistics, neuroscience, and environmental/ecological health issues. He has taught university classes as an adjunct professor in Environmental Health Science at the University of Georgia. He has been an invited speaker at national and international events.

Dr. Simon received a Bachelor of Arts in biology from Middlebury College in 1971. After several years of working, he decided to continue his biological studies at Georgia State University in Atlanta, where he received his Ph.D. in neurobiology and behavior. His doctoral thesis, titled "The Neural Basis of Light-Evoked Walking in Crayfish," was recognized with an Honorable Mention for the Donald B. Lindsley Prize in Neuroscience. For several years after his Ph.D., he worked as a postdoctoral fellow at Emory University in cellular neuroscience.

Dr. Simon is a diplomate of the American Board of Toxicology (ABT) and a professional member of the Society of Toxicology (SOT) and the Society for Risk Analysis (SRA). Previously, he was a member of the Society for Neuroscience (SFN), concluding his membership in 1993.

Dr. Simon's awards include EPA's Science Achievement Award in 2002 for his work on "Risk Assessment Guidance for Superfund (RAGS): Volume III: Probabilistic Risk Assessment." In 2017, Dr. Simon and his co-authors received an award for best paper of the year from the Risk Assessment Specialty Section of the Society of Toxicology for a work titled "How can carcinogenicity be predicted by high throughput 'characteristics of carcinogens' mechanistic data?" The full paper is available online.

Dr. Simon's publications include thirty peer-reviewed journal articles and one textbook: Environmental Risk Assessment: A Toxicological Approach, 2nd Edition. The textbook will be available in early 2020.

Outside of work, Dr. Simon enjoys photography, playing the violin, fishing, and spending time with his family. He and his wife Elizabeth have two children, Adam and Rebecca, and four grandchildren.

For more information, please visit http://www.tedsimon-toxicology.com

Contact: Katherine Green, 516-825-5634, pr@continentalwhoswho.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/ted-w-simon-is-being-recognized-by-continental-whos-who-301009205.html

SOURCE Continental Who's Who

Cognition in schizophrenia: a missing piece of the therapeutic puzzle – PLoS Blogs

Note: This post was written by Jessica Brown, PhD student at the University of Manchester.

What kind of mental image springs to mind upon reading the word "schizophrenia"? Many envisage an individual locked in a dark institution, constantly plagued by non-existent voices and vivid hallucinations. Even as a final year BSc Biology student with a neuroscience research placement under my belt, I too was guilty of this reflex association. Upon skimming through project titles on FindaPhD.com, the word "schizophrenia" jumped off the page. My excitement was sparked as I envisaged myself unravelling the intricacies of psychosis. As I examined the project title more closely, I admittedly experienced a minor surge of disappointment: the research was interested in targeting the cognitive deficits of schizophrenia. Cognitive deficits? I was unaware that cognition was significantly impaired in schizophrenia patients. And even if it was, did these symptoms really warrant extensive investigation? Surely, in the context of a disorder characterised by multimodal hallucinations and debilitating delusions, cognitive difficulties shouldn't be an urgent therapeutic priority.

The failure of current antipsychotics

A few hours of literature research and an interview with my PhD supervisor later, my appreciation of schizophrenia had been completely transformed.

Fortunately for our hypothetical institutionalised patient, modern antipsychotic drugs combating positive, psychotic symptoms have allowed many individuals to successfully function and flourish within their communities. So why, my supervisor pointed out to me, do so many schizophrenia patients still fail to achieve independent living, find employment and form relationships? Even more alarmingly, why are rates of symptomatic relapse so high? By the end of our conversation, I was convinced: the answer lies in the debilitating cognitive disturbances suffered by individuals, too often overlooked by research and crucially neglected by current drug therapies.

Cognitive impairment in schizophrenia: an unmet clinical need

Schizophrenia is a staggeringly heterogeneous disorder, with symptoms manifesting very differently in each patient. Amidst this variety, cognitive deficits are a consistent feature, persisting independently of circumstances such as medication, institutionalisation and advancement in cognition assessment tools. In particular, patients struggle in areas of verbal learning, processing speed and working memory.

Cognitive functioning in schizophrenia has been subjected to decades of research. However, the true impact of cognition upon disease outcomes has only recently come to light. A plethora of studies have drawn links between poor cognitive performance and impaired psychosocial functioning. One might argue that this is a rather obvious association. But why does it matter? Closer consideration reveals the enormous impact this has on daily life: if a schizophrenia patient is unable to perform hygiene-related tasks and keep up with their medications, they have little hope of finding employment or successfully integrating into community living.

As recently as January 2020, research has emphasised the detrimental effects of poor cognition. An Ecuadorian study conducted at the psychiatric Kennedy Hospital used the SCIP (Screening of Cognitive Deterioration in Psychiatry) tool alongside questionnaires assessing quality of life and sociodemographic status to reveal the inverse relationship between cognitive impairment and quality of life as perceived by the patient.

Even considering the impact of untreated cognitive symptoms upon quality of life, it is still reasonable to pose the question: so what? The sad reality is that for many patients, cognitive difficulties make antipsychotic drugs a futile intervention, leading to symptomatic relapse and a substantial waste of resources. As if the significance of cognitive impairment had not been sufficiently demonstrated, a Swedish study following over 500 schizophrenia patients made the staggering finding that executive function independently predicted premature death.

Therapeutic intervention: a multi-pronged approach

In the face of such alarming data, it is unsurprising that the cognitive deficits of schizophrenia have become an urgent therapeutic target. But how can cognition be elevated? Amongst the most promising interventions are drugs targeting NMDA receptors located on neurons in the brain; these receptors mediate signalling crucial for learning and memory functions. One such medication is memantine, which has shown some promise in schizophrenia patients.

Unfortunately, using pharmacological treatments to improve cognition is far from straightforward. It is critical to remember that these patients still rely upon antipsychotics to manage positive symptoms, which often interfere with the activity of cognition-targeting drugs. Even without this complication, is it rational to expect a single-target approach to be effective in treating such a complex, multi-faceted disorder? This is where cognitive remediation therapy comes in. Using behavioural training, this technique has been shown not only to improve performance across numerous cognitive domains, but also to delay the relapse of symptoms.

Concluding thoughts

As scientists, I believe we are often drawn to the "one size fits all" approach: current medicine is geared toward identifying a magic bullet to target a single, disease-causing agent. The game plan is clear: find this drug, roll it out to patients and the problem will be solved.

Sadly, as research continues to search for successful schizophrenia treatment strategies, one thing is becoming painfully clear: one size does not fit all. A particular cocktail of drugs and behavioural therapies allowing one patient to thrive may be completely unsuccessful in another. Encouragingly, current efforts are directed toward identifying patients most likely to benefit from certain treatment strategies, using biological indicators or biomarkers.

In the world of science, it is all too easy to become immersed in the daily frustrations and unsolved mysteries of research and forget why one is even investigating a particular disorder. As a colleague in neuroscience R&D at Eli Lilly once said to me: "In every meeting, there should always be a chair reserved for the most important person in the room. And that person is the patient."

There is an undeniably long way to go before schizophrenia patients will be able to make a complete recovery, with a low risk of relapse and a satisfactory quality of life. But recognising cognition as the wrongly neglected aspect of schizophrenia is certainly a step in the right direction.

References:

Avila J, Villacrés L, Rosado D, and Loor E. "Cognitive Deterioration and Quality of Life in Patients with Schizophrenia: A Single Institution Experience." Cureus 12, no. 1 (25 January 2020).

Molina, Juan, and Ming T. Tsuang. "Neurocognition and Treatment Outcomes in Schizophrenia." In Schizophrenia Treatment Outcomes: An Evidence-Based Approach to Recovery, edited by Amresh Shrivastava and Avinash De Sousa, 35–41. Cham: Springer International Publishing, 2020.

Schaefer, Jonathan, Evan Giangrande, Daniel R. Weinberger, and Dwight Dickinson. "The Global Cognitive Impairment in Schizophrenia: Consistent over Decades and around the World." Schizophrenia Research 150, no. 1 (October 2013): 42–50.

Evans, Jovier D., Robert K. Heaton, Jane S. Paulsen, Barton W. Palmer, Thomas Patterson, and Dilip V. Jeste. "The Relationship of Neuropsychological Abilities to Specific Domains of Functional Capacity in Older Schizophrenia Patients." Biological Psychiatry 53, no. 5 (1 March 2003): 422–30.

Semkovska, Maria, Marc-André Bédard, Lucie Godbout, Frédérique Limoge, and Emmanuel Stip. "Assessment of Executive Dysfunction during Activities of Daily Living in Schizophrenia." Schizophrenia Research 69, no. 2–3 (1 August 2004): 289–300.

Tsai, G. E. "Ultimate Translation: Developing Therapeutics Targeting on N-Methyl-d-Aspartate Receptor." Advances in Pharmacology (San Diego, Calif.) 76 (2016): 257–309.

Thomas, Michael L., Michael F. Green, Gerhard Hellemann, Catherine A. Sugar, Melissa Tarasenko, Monica E. Calkins, Tiffany A. Greenwood, et al. "Modeling Deficits From Early Auditory Information Processing to Psychosocial Functioning in Schizophrenia." JAMA Psychiatry 74, no. 1 (1 January 2017): 37–46.

Trapp, Wolfgang, Michael Landgrebe, Katharina Hoesl, Stefan Lautenbacher, Bernd Gallhofer, Wilfried Günther, and Goeran Hajak. "Cognitive Remediation Improves Cognition and Good Cognitive Performance Increases Time to Relapse: Results of a 5 Year Catamnestic Study in Schizophrenia Patients." BMC Psychiatry 13 (9 July 2013): 184.

Helldin, Lars, Fredrik Hjärthag, Anna-Karin Olsson, and Philip D. Harvey. "Cognitive Performance, Symptom Severity, and Survival among Patients with Schizophrenia Spectrum Disorder: A Prospective 15-Year Study." Schizophrenia Research 169, no. 1–3 (December 2015): 141–46.

The featured image, "#9/100 Jigsaw", belongs to the Flickr account of Rum Bucolic Ape and is used under a CC BY-ND 2.0 Creative Commons license.

Cognitive Assessment & Training Market Growth Prospect: Is the Tide Turning? – Chronicles 99

A new syndicated Global Cognitive Assessment & Training Market study has been added to the HTF MI database, covering key business segments and a wider geographical scope to deliver deeply analysed market data. The study bridges the qualitative and statistical data of the Cognitive Assessment & Training market. It provides historical data (i.e., Consumption** & Value) from 2014 to 2018 and forecasts through 2026*. Key and emerging players that are part of the coverage and have been profiled are Neurotrack, Cogniciti, Intendu, Halo Neuroscience, Cognetivity, Brightlamp, Edsix Brain Lab, BrainCheck & InteraXon.

Know how you are perceived in comparison to your competitors like Neurotrack, Cogniciti, Intendu, Halo Neuroscience, Cognetivity, Brightlamp, Edsix Brain Lab, BrainCheck & InteraXon; get an accurate view of your business in the Global Cognitive Assessment & Training marketplace. Click to get a Global Cognitive Assessment & Training Market Research Sample PDF Copy Instantly

Market Dynamics:

A set of qualitative information that includes PESTEL analysis, Porter's Five Forces model, value chain analysis, macroeconomic factors, and the regulatory framework, along with industry background and overview.

Key highlights that HTF MI is bringing with this study:
Revenue splits by the most promising business segments [by Type (Assessment, Training), by Industry (Healthcare, Education, Enterprise, Sports, Government & Defense), by Channel (Direct Sales, Distributor), and by Application, where applicable within the scope of the report].
Gap analysis by region.
Country-level break-up to dig out trends and emerging opportunities available in the area of your business interest.
% market share & sales revenue by key players & local regional players.
Dedicated section on market entropy to gain insights on players' aggressive strategies to build the market [mergers & acquisitions / recent funding & investment and key developments].
Patent analysis**: number of patents / trademark approvals filed & received in recent years.
Competitive landscape: listed players' company profiles with SWOT, in-depth overview, product/services specification, headquarters, subsidiaries, downstream buyers and upstream suppliers.

Check Exclusive Discount Offers Available On this Report @https://www.htfmarketreport.com/request-discount/2500224-global-cognitive-assessment-training-market-5

Competitive Landscape:

Mergers & Acquisitions, Agreements & Collaborations, New Product Launches, Business overview & detailed matrix of Product for each player listed in the study. Players exclusively profiled are Neurotrack, Cogniciti, Intendu, Halo Neuroscience, Cognetivity, Brightlamp, Edsix Brain Lab, BrainCheck & InteraXon

Most frequently asked question: Why can't I see my company profiled in the study? It might be that the company you are looking for is not listed; the study is based on a vast coverage of players operating in this market, but due to limited scope and pricing constraints we can only list a few companies, keeping a mix of leaders and emerging players. Do contact us if you wish to see any specific company of your interest in the survey. Currently, the companies available in the study are Neurotrack, Cogniciti, Intendu, Halo Neuroscience, Cognetivity, Brightlamp, Edsix Brain Lab, BrainCheck & InteraXon.

Segment & Regional Analysis: what market breakdown would be covered by geographies, type & application/end-users.
Cognitive Assessment & Training market revenue & growth rate by type [Assessment, Training; industry segmentation: Healthcare, Education, Enterprise, Sports, Government & Defense; channel: Direct Sales, Distributor] (historical & forecast).
Global Cognitive Assessment & Training market revenue & growth rate by application (historical & forecast).
Cognitive Assessment & Training market revenue & growth rate by each region specified (historical & forecast).
Cognitive Assessment & Training market volume & growth rate by each region specified, application & type (historical & forecast).
Cognitive Assessment & Training market revenue, volume & year-over-year growth rate by players (base year).

Enquire for customization in Report @https://www.htfmarketreport.com/enquiry-before-buy/2500224-global-cognitive-assessment-training-market-5

To comprehend the Global Cognitive Assessment & Training market dynamics, the worldwide Cognitive Assessment & Training market is analyzed across major global regions. HTF also provides customized regional and country-level reports.

North America: United States, Canada, and Mexico. South & Central America: Argentina, Chile, Colombia and Brazil. Middle East & Africa: Saudi Arabia, United Arab Emirates, Israel, Turkey, Egypt, Tunisia and South Africa. Europe: United Kingdom, France, Poland, Italy, Germany, Spain, NORDIC {Sweden, Norway, Finland, Denmark etc.}, BENELUX {Belgium, The Netherlands, Luxembourg}, and Russia. Asia-Pacific: SAARC Nations, China, Japan, South Korea, Southeast Asia, New Zealand & Australia.

Actual numbers and in-depth analysis, with emerging trends and market size estimation for the Cognitive Assessment & Training market, are available in the full copy of the report.

Buy Full Copy Global Cognitive Assessment & Training Report 2026 @https://www.htfmarketreport.com/buy-now?format=1&report=2500224

Thanks for reading this article; you can also get individual chapter-wise, section-wise or region-wise studies by limiting the scope to just G7 or G20 or European Union countries, Eastern Europe, East Asia or Southeast Asia.

About Author: HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited. HTF Market Report is a global research and market intelligence consulting organization uniquely positioned not only to identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making goals into a reality. Our understanding of the interplay between industry convergence, mega trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the accurate forecast in every industry we cover so our clients can reap the benefits of being early market entrants and can accomplish their goals & objectives.

Contact Us: Craig Francis (PR & Marketing Manager)
HTF Market Intelligence Consulting Private Limited
Unit No. 429, Parsonage Road, Edison, New Jersey, USA 08837
Phone: +1 (206) 317 1218
sales@htfmarketreport.com

Connect with us at LinkedIn | Facebook | Twitter

Human behavior at the intersection of many sciences – Dailyuw

People frequently ask themselves, "Why did I do that?" Attempting to understand how we react to and interact with changing environments has resulted in years of research on human behavior.

Neurobiologists and psychologists study the biological basis of how the brain responds under certain situations. Social scientists like anthropologists explain what factors guide our behavior, and engineers are drawing on all these studies to design tools that enhance human interaction, intelligence, and growth.

Human nature is complex, and interdisciplinary considerations may help us answer some interesting questions about how people think, remember, and behave.

"Things that are good for one's health and longevity, such as finding mates, food, and children: the dopamine reward or evaluation system is important to recall that success," said Sheri Mizumori, a professor in the department of psychology who studies behavioral neuroscience.

Dopamine is known as the feel-good neurotransmitter, a chemical messenger that relays information between neurons. It is released by the brain when we eat food, exercise, and crave sex, helping reinforce desirable behaviors by encoding values of rewards. Psychologists and neurologists have studied this through animal models that help explain how humans access their own memory to guide their actions.

"From a young age, babies learn that if an outcome is not what they want, they will change," Mizumori said. "Much of the brain has evolved to be a predictor of outcomes."

Memory can be thought of as a repository of past experiences that did and did not work. When we are placed in a new situation, we use strategies we learned from previous experiences to guide our actions.

"You are driving behavior based on memory and [guiding] behavior correctly the next time," Mizumori said.

The brain uses decision circuits that integrate information about past values from memory and evaluates it against our motivational, or internal, state. Understanding how the brain can switch behaviors or learn new ones is known as flexible decision making.

Theoretical psychologists study human behavior from a philosophical and social standpoint. A commonly known area of study asks whether nature or nurture (genetic or acquired factors) influences behavior.

Maslow's hierarchy of needs outlines a five-tier pyramid of deficiency and being needs. Once deficiency needs (the lower tiers) are met, people strive for self-fulfillment and personal growth, behaviors that encompass the fifth tier of the pyramid.

Depression is an interesting example of behavior at the intersection of social sciences and biology. Behavioral theory argues depression results from people's interactions with the environment, while psychodynamic theory states it stems from inwardly directed anger or loss of self-esteem.

Conversely, Mizumori explained depression from a behavioral-switch, or flexible decision-making, standpoint.

Researchers in human centered design and engineering (HCDE) are attempting to design technologies that can support or prompt changes in peoples behaviors.

"A lot of the research projects we explore are real-world-problem driven," said Gary Hsieh, an associate professor in HCDE. "How do we encourage users to eat healthier or exercise more? These are health-related problems aligned to behavior-related problems."

By studying the needs and values of certain groups, researchers like Hsieh are able to design technologies that encourage people to communicate and interact in welfare-improving ways. In a growing age of data, engineers and scientists are able to learn about people from social networks.

"Data allows us to study people in ways that we could not before," Hsieh said. "It ties in with the types of interventions and applications that we can build."

Human behavior presents unknown complexities that arise from cultural, social, internal, environmental, and biological factors. Being able to integrate all those is a challenge that many will be addressing for generations to follow.

Reach reporter Vidhi Singh at science@dailyuw.com. Twitter: @vidhisvida

10 Common Human Behaviors Explained With Science – Listverse

We do a lot of stuff every day that most of us never even think about. It's too bad, because the explanations behind some of our most ordinary functions are quite fascinating.

Though it's mostly thought of as an old wives' tale, the idea that gentlemen prefer blondes has biological grounding. The average woman with blonde hair is likely to have light skin, and skin with a paler pigment will more noticeably show physical defects. So a male prefers a female mate with blonde hair because he can more easily see how healthy their offspring will be.

Of course, females seek out and avoid the same qualities in males, so perhaps the adage should be that everyone prefers blondes.

There are many reasons for someone to be unfaithful, but aside from the psychological, it's possible some people literally have cheating in their DNA. Scientists have discovered a gene they call RS3 334, which is colloquially becoming known as the "divorce gene." In tests where men and women were asked to fill out detailed (and anonymous) questionnaires about their marriage, couples where the male of the relationship had one or more copies of the RS3 334 gene scored low, describing both unhappiness and frequent domestic troubles. It is thought the gene affects the body's release of vasopressin, a chemical responsible for human bonding and monogamy.

A lot of actions have become so ingrained in our culture that we don't stop to think about why we are doing them. Hugging is essentially grabbing someone for no reason and with no outcomes or time limit planned. It seems strange when analyzed like that, but the reasons can be explained: close contact with another human, such as that experienced through hugging, is linked to the release of oxytocin, a hormone responsible for attachment and trust. It's particularly useful in a relationship because the body contact occurring during sex releases oxytocin with the aim of pairing the two together for raising offspring.

Don't have anyone to cuddle? Don't worry: your brain also releases oxytocin for things like meaningful eye contact, generous acts, and even patting a dog.

The fear of strangers most children feel can be explained chemically. Oxytocin, the very same hormone that helps us bond with people we are close with, will also compel us to distrust people we don't know.

There have been studies where participants inhale either oxytocin or a placebo and engage in group games with incentives to cooperate. When the groups featured people the participants already knew in some manner, the oxytocin caused their cooperation to rise, but when the groups consisted of strangers, it caused cooperation to fall. This is possibly left over from our ancestors, who needed to trust their own tribe while maintaining a healthy, defensive fear of other tribes they came across.

We scratch all the time, but do we benefit from it at all? Scratching, or more accurately having an itch, is your body's way of eliminating potentially harmful irritants or external objects. For example: an ant crawls onto your foot, so that area of your foot itches; you scratch that area and brush the ant away.

So it does help us, but why do we scratch so often? It's not like we are covered in bugs all the time. Well, from an evolutionary standpoint it makes sense to scratch at anything that might be dangerous. While scratching something that wasn't a threat is fine, not scratching something that is dangerous can lead to problems.

Someone offers you some chocolate. On one hand, you want to eat it, but on the other hand you are worried about weight gain. You make a deal: "I can have the chocolate now, as long as I promise to go to the gym tomorrow." Who exactly are you making that deal with? Technically, another person, at least according to your brain.

Its severity differs for everyone, but in many cases the same part of the brain that lights up when you think about others is also used to think about your future self. Subconsciously, you literally consider your future self a different person.

Laughing is another activity that, when analyzed, seems absurd: a series of strange whooping noises following any number of things a human might find amusing. The areas of the brain that regulate laughing also regulate breathing and speech, so laughter is a very primal part of our functioning. It surely has a purpose, but what?

Scientists think that when we laugh, we communicate a playful intent, indicating to others we trust them as a group member. This explains why laughing is contagious, and tests have shown that humans are far less likely to laugh when alone.

Everyone knows we sleep at night and wake in the day, but what exactly controls that? Most of us can't make ourselves fall asleep or wake up at will, so what does?

The answer is melatonin. In the morning, exposure to light triggers a variety of chemical and hormone releases that get us going and assist us in our daily activities, and the same thing occurs for the opposite reason at night. Melatonin is a natural hormone that helps us sleep. It's made by your pineal gland, which only turns on when darkness occurs. Melatonin levels will stay fairly high for roughly 12 hours before exposure to light the next morning causes them to decrease.

The problem is that our pineal gland doesn't understand artificial light, so being in dark rooms during the day or bright rooms at night drastically affects our body clock.

Have you ever wondered why people lose their temper? Anger and aggression are perhaps the feelings we feel we can least control, and sometimes we really do have no control at all. The amygdala is one area of the brain that has been shown to cause aggression, and damage to this area results in amplified aggressive behavior. The prefrontal cortex receives impulses from the amygdala and processes other information to decide if it should take action. Damage to the amygdala through physical trauma, tumor, or birth defect can result in those impulses becoming overwhelming, causing urges and impulses toward aggressive acts the person might not morally agree with.

Pedophilia is of course not a common or acceptable trait for humans, but in some cases it can be explained physically. In 2000, a married man suddenly developed a severe pornography addiction and pedophilic thoughts accompanied by excruciating headaches. He sought help, and it was soon discovered that the man had a tumor the size of an egg growing in his brain, pressing on his prefrontal cortex, which (as previously discussed) regulates urges. When the tumor was removed, the man's behavior returned to normal and his unsavory sexual desires evaporated.

This kind of case is rare, but nevertheless possible. While we don't normally experience such severe swings, it raises the question: do you control your actions or is it just all those chemicals?

Scott Friggin tweets.

How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing institutions.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (Wargames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really artificial if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemy's river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Hölldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as "personality," may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
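As a concrete, purely illustrative example of the swarm idea mentioned above, the sketch below implements three classic local rules (cohesion, separation, alignment) in the style of Reynolds-type "boids" models. The agent count, rule weights, and neighborhood radius are arbitrary assumptions, and the sketch has nothing to do with any fielded military system; it only shows how group-level structure can emerge from local rules with no central commander.

```python
import numpy as np

# Purely illustrative "swarm" sketch: each agent follows simple local rules,
# and group structure emerges without a central controller. All parameter
# values are invented for illustration.
rng = np.random.default_rng(1)
n = 50
pos = rng.uniform(0.0, 100.0, size=(n, 2))   # 2-D positions
vel = rng.uniform(-1.0, 1.0, size=(n, 2))    # 2-D velocities

def step(pos, vel, radius=15.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]                       # vectors to every other agent
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)          # local neighborhood only
        if not near.any():
            continue
        cohesion = 0.01 * offsets[near].mean(axis=0)                              # drift toward neighbors
        separation = -0.5 * (offsets[near] / dist[near][:, None] ** 2).sum(axis=0)  # avoid crowding
        alignment = 0.05 * (vel[near].mean(axis=0) - vel[i])                      # match neighbors' heading
        new_vel[i] += cohesion + separation + alignment
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                        # cap the speed
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

# Report how tightly agents have grouped after a few hundred steps.
pairwise = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
print("mean nearest-neighbor distance:", round(np.sort(pairwise, axis=1)[:, 1].mean(), 2))
```

Note that nothing in this sketch provides hierarchy, decision rights, or trained responses; the essay's argument is that militarily useful behavior will require those layers on top of whatever emerges from simple rules or machine learning.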

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly-evolving personalities we confront something of a blank slate. But we must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term "personality" for the details of a robot's behavior patterns. One could call it a robot's "programming" if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from their participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. While these systems will have human-like characteristics, they will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts to manage this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)

See the rest here:
How to Train Your AI Soldier Robots (and the Humans Who Command Them) - War on the Rocks

The Art of Animal Adaptation – Scientific American

Given the changes in our climate and the growth of the human population, animals are increasingly being forced to adapt to human behavior in unexpected ways. Whether it's crocodiles using pool noodles as flotation devices, coyotes becoming more nocturnal to avoid people or a huddle of walruses sinking a research vessel that invaded their territory, animals are figuring out how to navigate the world we have created. I've created an artist's book, the Field Guide to Animal Adaptation, which identifies and illustrates 16 examples of this phenomenon, providing both hope and despair for the coexistence of people and animals in the future. I've launched a Kickstarter campaign to fund the printing of a limited edition.

The idea for this book started when I stumbled across an article about mountain goats in Olympic National Park being airlifted to a less populated area because they had become addicted to hikers' urine. The goats threatened park visitors in their quest for that precious salty liquid. It seemed both ridiculous and tragic to me that the National Park Service thought that spending several million dollars to relocate these animals would be more successful than expecting people not to pee in the woods. I found more absurd and sad examples of these human-animal interactions, and so the idea for this field guide was born.

As an artist, I'm interested in how images can, or can't, communicate scientific ideas to the general public. Today climate scientists are recognizing the ability of art to communicate complex scientific information and possibly influence behavior to mitigate the effects of climate change. As one recent study put it, art can "elicit visceral, emotional responses and engage the imagination in ways that prompt action or behavior change that purely scientific, fact-based or cognitive approaches don't seem to evoke." Can stories and images of individual animals prompt reactions and, hopefully, action around animal conservation?

As a layperson, I struggle with understanding the patterns and impact of climate change on a large scale and the implications of being in the midst of mass extinction. Like too many, I get some of my information from clickbait interpretations of scientific reports. The urgency comes when I think about the impact of these global changes on my child or see images of my favorite animals starving or dead; in other words, when the global is made personal and emotional.

Art is a vehicle to develop empathy. Art encourages understanding because the process of comprehending art is an emotional reasoning that, in the words of visual artist and cultural anthropologist Lydia Nakashima Degarrod, is "neither purely cognitive and imaginative nor purely emotional, but is a combination of both." As an artist, I aim to translate complex concepts into images that people respond to viscerally and emotionally but also consider intellectually.

My goal is that my images of absurdity, such as a coconut octopus using a plastic cup as a shelter, get people to think about their own consumption habits. The image creates an uncanny disconnect between our static or ahistorical expectations of the natural world (coconut octopuses sheltering in materials from their natural surroundings) and the reality of animals adapting to an ecosystem polluted by humans (the abundance of single-use plastics in the ocean), which jolts us into considering these stories in a new light.

In 2016 and 2018 I participated in exhibition projects created by Creature Conserve, an organization that brings artists and scientists together to foster informed and sustained support for animal conservation. I talked with a shark veterinarian and made a short animation about the effects of the shark fin trade on whale sharks. I corresponded with a bat researcher and created images about the resilience of the animal's bone structure. These pieces and the works of many other artists were exhibited at Rhode Island School of Design and the National Museum of Wildlife Art. After these experiences, I wanted to create another project that moved these animal stories out of gallery spaces and brought them to people in a more intimate way: a book.

Physical books create a connection. Books are held less than a few feet away from our eyes, we have to touch them to turn the page, and we can look at each individual page and image for as long as we want. The Field Guide to Animal Adaptation is modeled after popular field guides in size and structure. The animal adaptation stories and illustrations are organized by theme, with range maps and species information. There is a section on how to create one's own field notes and resources for ways people can get involved in animal conservation. W. John Koolage, a professor of philosophy at Eastern Michigan University, is writing an introduction that explores the positioning of humans and animals in scientific classification systems.

My selection of animal adaptation stories is also intended to give some historical context to today's extinction crisis. The introduction of invasive species, whether deliberate or accidental, has been a part of the human story since the beginning. Rats, for example, have successfully adapted to almost every part of the planet and have frequently hitched a ride to new territories on human vessels. Some species will react favorably, in the short term, to changes in their ecosystem. Australian gray nurse sharks, for example, may be able to connect two of their populations with the warming ocean, but that accomplishment doesn't mean the species as a whole will survive massive temperature changes. My image depicts two sharks almost touching but superimposed over a stylized and artificial wave background.

While mostly about individual animals or small groups, the selected animal adaptation stories in my book have taken place all over the world. A vast majority of species have to adapt to the effects of human behavior and encroachment to some degree. These individual stories serve as a microcosm of global trends. While I have no measurable way to know if this book will have a direct effect on its audience, my hope is that it will be one of the many voices that inspire people to take action on animal conservation.

Read the original:
The Art of Animal Adaptation - Scientific American

FTSE 100 And Fortune 500 Businesses Join Forces To Tackle The Human-Centered Security Problem – Forbes

An industry-wide consultation process to find a solution to the human-centered cybersecurity puzzle has started.

Can the OutThink human-risk framework project solve the cybersecurity people puzzle?

Angela Sasse is professor of human-centered security at both Ruhr University Bochum in Germany and UCL in London. She's also the chief scientific adviser to predictive human risk intelligence platform startup OutThink, which recently completed a £1.2 million ($1.5 million) seed-funding round. Professor Sasse is to write the world's first comprehensive framework for the management of human risk in cybersecurity. The project, led by OutThink, will run for six months and is already starting to attract buy-in from some Fortune 500, FTSE 100 and Euronext 100 names. To succeed, however, it needs more collaboration from CISOs and security practitioners, which is why Professor Sasse is launching an industry-wide consultation process.

There's certainly little doubt that there is a human side to cybersecurity risk. You only have to read the technology news headlines whenever a major news event, such as coronavirus, strikes. The cyber-criminals looking to exploit human nature are never far behind. With phishing kits for sale that target Amazon, Apple and PayPal users, for example, the social engineering threat is now an off-the-shelf one. And that's before you start looking at other aspects of human risk.

A recent review published by the European Union Agency for Network and Information Security (ENISA) found that there were only a small number of models when it came to the behavioral aspects of cybersecurity. None of these, it concluded, were a "particularly good fit for understanding, predicting, or changing cybersecurity behavior." Indeed, the ENISA report found that many ignored the context of cybersecurity behaviors and that there was evidence that models which enabled "appropriate cybersecurity behavior" had more effect than those relying upon threat awareness training, or punishment, as drivers for more secure conduct. This was what spurred Professor Sasse to start the new initiative. "Investment in technical security measures continues to dominate the way in which CISOs attempt to manage cyber risks," Professor Sasse said, "whilst employees suffer as their productivity is hindered by limiting solutions, meaning they often circumvent security so that they can do their jobs. This framework is the perfect opportunity to right these wrongs."

OutThink human risk framework project buy-in from Vodafone Group and Centrica

Amongst those who have already expressed an interest in the OutThink project is Imogen Verret, head of security awareness at Vodafone Group. "For me, security awareness training is only the starting point," she said, adding, "I'm keen to work on the project with OutThink and other security practitioners to design a solution that works for both the business and the employee."

Dexter Casey, group chief security officer at Centrica, has said that the job of a modern CISO is far from easy, which is something of an understatement. "We all know about 'people, process, tech' being the three pillars of effective security," Casey said, "and make significant investment to address processes and technology, but there's a serious gap when it comes to sensible guidance on the people side of security." Casey is hopeful that the framework being discussed can provide "realistic, actionable, practical advice for CISOs so that they can solve one of their biggest problems."

I contacted another academic, Daniel Dresner, who is an acquaintance of mine and professor of cybersecurity at the University of Manchester. Professor Dresner says that when he hears that title, a comprehensive framework for the management of human risk, it sounds like another worthy attempt to deal with the challenge of cybersecurity. That it is a separate framework concerns him, though, and Professor Dresner says we will continue to fail to properly address security risk because "we should adopt the attitude that there is no such thing as human error, it is just people being human," adding that "mantras of 'weakest link' and then 'strongest asset' have held us back from considering technology and people at the same time." In an email conversation, Professor Dresner said that as soon as mention of the people side of security is made, "the tired and restrictive practice of denying technology as a solution is rolled out to protect the polarization like the courtiers' fear in 'The Emperor's New Clothes.'" Therefore, Professor Dresner says, the important basics of the UK National Cyber Security Centre (NCSC) Cyber Essentials, designed to help protect organizations from cyber-attack, are "sacrificed on the altar of too-simple." If considered properly, he says, "you realize that the protection they afford is proportionate, and they are not that simple when scaled up. They are," Professor Dresner concludes, "as simple as possible, but no simpler."

Ian Thornton-Trump, CISO at Cyjax, is also somewhat "pessimistic about frameworks to begin with," he says, "as anyone with a background in the National Institute of Standards and Technology (NIST) cybersecurity framework can understand it's a gargantuan task to audit, let alone implement, without substantial effort and investment across the organization." Apart, that is, from a framework which Thornton-Trump calls out as existing already: "employee morale and organizational stress." It's low morale and stress that cause mistakes or security issues related to insider behavior, Thornton-Trump says, "I wonder how many S3 buckets were made public due to mistakes by IT resources that were under stress and of low morale?" Perhaps folk just need to be better managers and champions of change, he concludes.
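Thornton-Trump's S3 example is a concrete one, and the kind of slip he describes is also the kind that routine automated checks can catch. The sketch below is an illustration of such a check, not something proposed in the article: it uses boto3 (assuming AWS credentials are already configured) to flag any bucket in an account whose public-access block is missing or not fully enabled. The reporting format is hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        # A bucket with no public-access-block configuration at all counts as a finding.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"REVIEW: {name} does not fully block public access")
```

A check like this does not address morale or stress, but it narrows the window in which a tired operator's mistake stays exposed.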

One experienced CISO, founder of NSC42 and chair of the Cloud Security Alliance UK chapter, Francesco Cipollone, is more enthusiastic about the opportunity the OutThink project could provide. "The NIST cybersecurity framework is being widely adopted in enterprises and SMBs," Cipollone says. While organizations have initially been focusing on NIST's pillars of identify and protect, "now there is increasing attention on the other two pillars of detect and respond," he says. So, the NIST framework provides guidance on how to detect and respond to a generic attack while the framework proposed by OutThink can focus on human risk. "A holistic view and framework focused on the risks from humans, like the insider threat or misconfiguration issues, is very much needed," Cipollone says. "The recent focus of malicious actors on social engineering in conjunction with open-source intelligence (OSINT) techniques to target the human aspect of an organization, traditionally the weakest link," he concludes, "makes this framework even more valuable."

Professor Sasse is being joined by Dr. Shorful Islam, OutThink's chief product and data officer, who has a Ph.D. in psychology and deep expertise in modeling human behavior but knows that more collaborators are needed for the project to be successful. "I am glad to have the buy-in of so many esteemed security professionals," Professor Sasse said, "it validates what we are trying to do and will ensure that the framework suits the needs of the CISO. I would invite anyone else that wants to get involved to get in touch."

If you are a CISO, security practitioner or researcher and would like to join the project, you can visit OutThink at booth 1647F at the RSA conference in San Francisco between February 24 and 28, or get in touch by email at hello@outthinkthreats.com.

Here is the original post:
FTSE 100 And Fortune 500 Businesses Join Forces To Tackle The Human-Centered Security Problem - Forbes