The Technology Thread - Part 1

I could download and store that, but my monitor is nowhere near good enough to display it properly.
 
Wireless AI Device Tracks and Zaps the Brain, Takes Aim at Parkinson’s



Zapping the brain with implanted electrodes may sound like a ridiculously dangerous treatment, but for many patients with Parkinson’s disease, deep brain stimulation (DBS) is their only relief.

The procedure starts with open-skull surgery. Guided by MRI images, surgeons implant electrodes into deep-seated brain regions that contain malfunctioning neural networks. By rapidly delivering electrical pulses, DBS can dampen — or completely quiet — the severe motor tremors that invade Parkinson’s patients’ lives.

Yet getting the best result out of DBS is an infuriating process of trial-and-error. To fit the stimulation to each patient’s needs, clinicians often repeatedly tweak the treatment’s many parameters, such as amplitude, frequency, and how long each stimulation lasts. Feedback is based on the patient’s behavioral response, which is often subjective, and as the disorder progresses a program that works today may lose its therapeutic effects tomorrow.

A major cause of all this guesswork is that current generation devices can’t record how the brain is responding to the treatment, which leaves everyone in the dark.

It’s a costly problem that’s expected to spread.

In addition to Parkinson’s, DBS is being tested as a potential treatment for obsessive-compulsive disorder, Tourette’s syndrome, treatment-resistant depression and even Alzheimer’s disease. Despite its promising results, little is known about how electrical pulses work on neural networks to change behavior.

But now, a team led by Dr. Kendall Lee at the Mayo Clinic in Rochester, MN, has engineered a closed-loop, wireless device called WINCS Harmony that can simultaneously measure neurotransmitter levels from multiple brain regions and adjust its stimulation pattern accordingly in real time.

Given that neurons communicate via electrical and chemical signaling, neurotransmitter levels act as a proxy for treatment efficacy. Combined with sophisticated artificial neural networks, this information fine-tunes the stimulation process automatically.

Researchers also gain insight into the mysterious mechanisms behind DBS that have so far eluded the field.

“It’s really a game-changer,” says Dr. Karen Davis, a neuroengineer at the Toronto Western Hospital, who uses DBS for pain management.

The team presented their results this week at Neuroscience 2015 in Chicago, the largest annual international gathering of neuroscientists, organized by the Society for Neuroscience.

Closing The Loop

Harmony builds upon previous DBS technology that’s already been approved by the FDA for human use.

Previous devices have tried to capture stimulation-induced neural feedback by recording the neurons’ electrical responses, says Dr. J. Luis Lujan, the lead author of the study. The problem was that the signals from the stimulating and recording electrodes heavily interfered with each other.

It’s a common and terrible problem, says Lujan; the data was far too messy to use.

Instead, the team turned to fast-scan cyclic voltammetry, a chemical sensing technique originally developed for animal research. Every 10 milliseconds or so, the device applies a local voltage sweep, which transiently pulls electrons from neurotransmitters in the area. This generates a small electrical current that can be picked up by the electrode.

Since each neurotransmitter produces a unique current signature, the recordings can both identify what type it is and estimate its concentration.

This data is then wirelessly fed into a single-layer artificial neural network, which uses the electrical and chemical patterns as feedback to tweak the weight of each node in the network. This in turn changes DBS parameters to keep the brain in an optimally functional state.
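For readers curious how such a feedback loop works in principle, here is a minimal, purely illustrative sketch: a toy "brain" whose dopamine reading scales linearly with stimulation amplitude, and a simple proportional update standing in for the device's single-layer neural network. Every name and number below is invented for illustration; this is not the WINCS Harmony algorithm.

```python
# Toy closed-loop stimulation controller (all values illustrative).

TARGET_DOPAMINE = 1.0   # desired neurotransmitter level (arbitrary units)
LEARNING_RATE = 0.5     # how aggressively the controller corrects errors

def measure_dopamine(amplitude):
    """Toy voltammetry reading: level scales with stimulation amplitude."""
    return 0.8 * amplitude

def closed_loop(steps=50):
    amplitude = 0.1  # deliberately off-target starting point
    for _ in range(steps):
        error = TARGET_DOPAMINE - measure_dopamine(amplitude)
        amplitude += LEARNING_RATE * error  # feedback correction
    return amplitude, measure_dopamine(amplitude)

amp, level = closed_loop()
print(f"final amplitude={amp:.3f}, dopamine={level:.3f}")
```

Even from a deliberately wrong starting amplitude, the loop converges on the target level within a handful of updates, which is the behavior the team demonstrated when they began stimulation with an off-target pattern.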

As proof-of-concept, the team tested their device on three rats by measuring local dopamine levels in a brain region called the striatum.

“Dopamine is involved in many disorders that we want to treat with DBS, such as Parkinson’s,” said Lujan, “that’s why we tried it first.” But the device can also work on other transmitters such as serotonin, which is involved in depression.

By using data from 25 stimulation trials as the training set for the artificial neural network, the team showed that the device rapidly adjusted its stimulation patterns to reach a predefined optimal level. The system was highly resistant to errors: when researchers deliberately began the stimulation using an off-target pattern, Harmony rapidly adjusted and brought itself back on course.

Looking ahead

Right now, in order for DBS to work, it has to be on 24-7, says Lujan. But patients don’t exhibit symptoms all the time, so we’re likely overstimulating the brain. Since we still don’t really understand what DBS is doing in the brain, we may be inadvertently damaging other brain functions without realizing it.

Harmony may also help illuminate what’s malfunctioning in the brain at the exact time point when patients exhibit symptoms.

For example, we can observe chemical signatures in the brain of Parkinson’s patients when they display tremors and compare to when they do not. That’s extremely valuable information, says Lujan.

Similarly, Harmony may also be able to monitor the brain for telltale signs that a bipolar patient is entering a manic episode, and automatically produce the stimulation pattern needed to stop the attack before it strikes.

The results are highly promising, but it will still be a few years before we can start testing the device in humans, admitted Lujan. The team is working on making the lunchbox-sized device smaller, so that it can be directly implanted into the brain along with the electrodes. To reduce the number of recurrent brain surgeries, the device also has to be made more durable.

Then there’s basic neurobiology. Our current understanding of the brain networks underlying disorders such as depression is still relatively primitive.

But the team is hopeful.

Devices like Harmony are one of the best tools to help us understand how malfunctioning neural networks misfire and what DBS does to the brain, says Lujan. We’re starting with Parkinson’s disease because we know far more about the networks involved, and since the behavioral outcomes are easily observable motor symptoms, they are highly objective and easy to measure.

That doesn’t mean we’re scared to tackle the other ones though, laughed Lujan. We just want to push this to human patients as soon as possible.

“Do we have all the answers? Of course not!” he said. “But now we have the tools to figure it out.”
 
Naughty or Nice? One Brain Scan Is Now All It Takes to Find Out



With a simple scan of your brain at rest, scientists can now guess whether — on average — you are naughty or nice.

“We have now begun to see really strong evidence of a connection between measures of brain function, connectivity and many aspects of people’s lives and personality,” says lead author Dr. Stephen Smith, a biomedical engineer at the University of Oxford.

The surprisingly strong correlations, published last week in Nature Neuroscience, are the first to emerge from the ambitious Human Connectome Project (HCP), a global effort that seeks to map all the pathways between the brain’s hundreds of regions and millions of neurons, and then to relate those connectivity patterns to personality and behavior.

“I am my connectome”

So stated Dr. Sebastian Seung, a computational neuroscientist at MIT, in a 2010 TED talk that propelled the nascent field of connectomics into the public limelight. In one sentence, Seung touched on two provocative ideas.

One was philosophical: that people’s self-identity — our personality, habits, lifestyles, memories and experiences — are stored in functional connections in our brains. Disrupt the connectome (as in cases like schizophrenia), and we lose our core identity.

The other idea was more of a scientific prophecy: by advancing brain-imaging technologies, we may be able to map out the wiring of our brains in unprecedented detail. Similar to geographical maps, which allowed explorers to venture to the edge of the world, a brain atlas may push the frontier of neuroscience forward by offering an in-depth visualization of the inner workings of our minds.

“The days of just looking at one part of the brain are waning,” says Dr. Arthur Toga at the University of Southern California, a lead scientist in the HCP project.

Launched six years ago with initial funding of $40 million, the HCP is scanning the brains of 1,200 adults with fMRI. A major branch of the project focuses on the brain at rest — that is, when it’s not concentrating on a specific task but rather allowed to wander. These “resting state connectomes” are thought to reflect how different areas of the brain are hooked up in a “locked and loaded” state, in case a sudden future task requires them to efficiently fire together.

But there’s more: each brain scan is linked to reams of demographic and personality data summarized by hundreds of different traits. These range from objective measures such as IQ test scores, attention span and socioeconomic status to self-reported factors like life satisfaction, personality, and past drug use or physical aggression.

It’s a treasure trove ripe for data mining. Smith and his team dug deep.

Good brains, bad brains

The team wasn’t simply interested in relating one personality or success factor to another. From the outset, explained Smith, we wanted to use a single integrated analysis to see whether specific brain connectivity patterns are associated with specific sets of correlated traits, either good or bad.

The team took the data from 461 scans and ran a massive computer program to create an average map of the brain’s resting state across 200 different regions. Then in every participant, the scientists looked at how much those regions talked to each other, effectively charting out where the strongest links lie and how strong those connections are.

Finally, in a computational tour-de-force, the team added 280 different traits to the pool of brain scan data, and for each participant performed canonical correlation analysis — a type of statistical wizardry that helps unearth relationships between datasets with hundreds of complex variables.

The result was stark and stunning: the brain connectivity patterns could be aligned in a single axis, where one end was associated with positive traits — such as more education, better memory and physical performance — whereas the other with negative ones, such as rule-breaking and poor sleep quality.
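For the curious, the core of canonical correlation analysis can be sketched in a few lines with synthetic data: a "connectivity" matrix X and a "traits" matrix Y that secretly share one hidden positive-negative axis, whose first canonical correlation comes out close to 1. The data sizes and noise levels below are invented for illustration, not taken from the study.

```python
import numpy as np

# Numpy-only CCA sketch on synthetic data sharing one hidden axis.
rng = np.random.default_rng(0)
n = 461  # participants, as in the study
latent = rng.standard_normal(n)  # the shared positive-negative axis
X = np.outer(latent, rng.standard_normal(20)) + 0.1 * rng.standard_normal((n, 20))
Y = np.outer(latent, rng.standard_normal(8)) + 0.1 * rng.standard_normal((n, 8))

def first_canonical_correlation(X, Y):
    """Largest correlation between any linear combination of X's columns
    and any linear combination of Y's columns."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for each dataset's column space; the singular
    # values of their cross-product are the canonical correlations.
    Ux = np.linalg.svd(X, full_matrices=False)[0]
    Uy = np.linalg.svd(Y, full_matrices=False)[0]
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)[0]

print(f"first canonical correlation: {first_canonical_correlation(X, Y):.3f}")
```

Because both synthetic datasets were built around the same latent variable, CCA recovers a near-perfect first canonical correlation — a toy version of the single positive-negative axis the team found.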

People on the “positive” side of the axis also had stronger connectivity between brain networks associated with higher cognitive functions, including memory, language, introspection and imagination.

It’s incredibly impressive, says Dr. Marcus Raichle, a neuroscientist at Washington University, that scans of the brain at rest were sufficient to condense vastly variable life experiences into a single, simple axis. With one scan, you can distinguish people with successful traits who lead successful lives from those who don’t, he says.

Obviously for each person there’s going to be an error bar in guessing who they are as a person by just looking at their brain connectivity, stressed Smith, but overall the association is very strong.

Building a good connectome

The study raises the obvious question: which way does the influence go? Are our brains’ wiring patterns leading us to success or failure in life, or are life experiences shaping the connections in our brains?

For now, says Smith, we don’t know the answer, and it’s very likely that the direction of causality goes both ways. To test causality, we would need intervention studies that impose positive traits — say, enforced education — and see whether that imprints a “good” connectome on the brain.

That’s in the future, but for now we can look for traits that correlate with positive brain connections, and tweak them in a way that pulls the brain towards the “positive” end of the axis.

Marijuana use in recent weeks, for example, was one of the factors that drove a brain more towards the “bad” end of the axis. This suggests marijuana use is one negative factor that should be prioritized for further study.

But scientists caution that “good” and “bad” are relative, and must be placed into social context — is it fair to consider marijuana use for medical reasons as negative?

A person’s placement on the positive-negative scale also isn’t a proxy for general intelligence. For example, people with good hand-eye coordination tended to slide towards the “bad” side of the axis.

There’s so much that we still need to sort out, says Van Wedeen, a neuroscientist at the Massachusetts General Hospital.

This is just a taste of what’s to come — with the HCP in full swing, it’s likely that a further deluge of data — how we develop as kids, how our brains age in health and disease — will provoke more questions than answers.

But it’s a good place to start.
 
Scientists Connect Brain to a Basic Tablet—Paralyzed Patient Googles With Ease



For patient T6, 2014 was a happy year.

That was the year she learned to control a Nexus tablet with her brain waves, taking her quality of life from the equivalent of 1980s DOS to the modern Android era.

A brunette lady in her early 50s, patient T6 suffers from amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease), which causes progressive motor neuron damage. Mostly paralyzed from the neck down, T6 retains her sharp wit, love for red lipstick and miraculous green thumb. What she didn’t have, until recently, was the ability to communicate with the outside world.

Brain-Machine Interfaces

Like T6, millions of people worldwide have severe paralysis from spinal cord injury, stroke or neurodegenerative diseases, which prevents them from speaking, writing or otherwise communicating their thoughts and intentions to their loved ones.

The field of brain-machine interfaces blossomed nearly two decades ago in an effort to develop assistive devices to help these “locked-in” people. And the results have been fantastic: eye- or head-tracking devices have allowed eye movement to act as an output system to control mouse cursors on computer screens. In some cases, the user could also perform the click function by staring intently at a single spot, known in the field as “dwell time.”
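The dwell-time click idea is simple enough to sketch: if the gaze point stays within a small radius of where it first settled for long enough, register a click. The thresholds below are illustrative, not those of any particular eye-tracker.

```python
# Dwell-time click detection sketch (all thresholds illustrative).

DWELL_SECONDS = 1.0   # how long the gaze must hold still
RADIUS_PX = 30        # how far the gaze may wander and still count
SAMPLE_HZ = 60        # gaze samples per second

def detect_dwell_clicks(gaze_points):
    """gaze_points: sequence of (x, y) samples at SAMPLE_HZ.
    Returns the sample indexes at which a dwell click fires."""
    clicks = []
    anchor, count = None, 0
    needed = int(DWELL_SECONDS * SAMPLE_HZ)
    for i, (x, y) in enumerate(gaze_points):
        within = (anchor is not None and
                  (x - anchor[0]) ** 2 + (y - anchor[1]) ** 2 <= RADIUS_PX ** 2)
        if within:
            count += 1
            if count == needed:  # fire exactly once per dwell
                clicks.append(i)
        else:
            anchor, count = (x, y), 1  # gaze moved: restart the dwell timer

    return clicks

# A steady one-second stare clicks once; a constantly moving gaze never does.
print(detect_dwell_clicks([(100, 100)] * 120))  # -> [59]
```

The sketch also hints at why dwell clicking is tiring: the user must hold a fixation for a full second per click, and any drift past the radius resets the timer.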

Yet despite a deluge of promising devices, eye-tracking remains imprecise and terribly tiring to the users’ eyes. Since most systems require custom hardware, this jacks up the price of admission, limiting current technology to a lucky select few.

“We really wanted to move these assisted technologies towards clinical feasibility,” said Dr. Paul Nuyujukian, a neuroengineer and physician from Stanford University, in a talk at the 2015 Society for Neuroscience annual conference that took place this week in Chicago.

That’s where the idea of neural prostheses came in, Nuyujukian said.

In contrast to eye-trackers, neural prostheses directly interface the brain with computers, in essence cutting out the middleman — the sensory organs that we normally use to interact with our environment.

Instead, a baby-aspirin-sized microarray chip is directly implanted into the brain, and neural signals associated with intent can be decoded by sophisticated algorithms in real time and used to control mouse cursors.

It’s a technology that’s leaps and bounds ahead of eye-trackers, but still prohibitively expensive and hard to use.

Nuyujukian’s team, together with patient T6, set out to tackle this problem.

A Nexus to Nexus 9

Two years ago, patient T6 volunteered for the BrainGate clinical trials and had a 100-channel electrode array implanted into the left side of her brain in regions responsible for movement.

At the time, the Stanford subdivision was working on a prototype prosthetic device to help paralyzed patients type out words on a custom-designed keyboard by simply thinking about the words they want to spell.

The prototype worked like this: the implanted electrodes recorded her brain activity as she looked to a target letter on the screen, passed it on to the neuroprosthesis, which then interpreted the signals and translated them into continuous control of cursor movements and clicks.
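As a rough illustration of the decode step, here is a toy sketch in which a fixed linear decoder maps one time-bin of firing rates from a 100-channel array to a 2-D cursor velocity that is integrated into a position. The weights and spike rates are random stand-ins; a real system fits the decoder from calibration data, and the actual BrainGate decoder is considerably more sophisticated.

```python
import numpy as np

# Toy linear neural decoder: firing rates -> cursor velocity -> position.
rng = np.random.default_rng(1)
N_CHANNELS = 100  # electrode count, as in the implanted array
W = 0.01 * rng.standard_normal((2, N_CHANNELS))  # decoder weights (made up)
DT = 0.05  # 50 ms update bin

def decode_velocity(firing_rates):
    """Map one bin of per-channel firing rates to a (vx, vy) velocity."""
    return W @ firing_rates

cursor = np.zeros(2)
for _ in range(20):  # one second of simulated control
    rates = rng.poisson(5, N_CHANNELS).astype(float)  # fake spike counts
    cursor += DT * decode_velocity(rates)
print("cursor position:", cursor)
```

The same decoded velocity-and-click stream is what the team later rerouted: instead of driving a custom keyboard, it drove taps on a stock tablet.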

In this way, T6 could type out her thoughts using the interface, in a way similar to an elderly technophobe reluctantly tapping out messages with a single inflexible finger.

The black-and-white setup was state-of-the-art in terms of response and accuracy. But the process was painfully slow, and even with extensive training, T6 often had to move her eyes to the delete button to correct her errors.

What the field needed was a flexible, customizable and affordable device that didn’t physically connect to a computer via electrodes, according to Nuyujukian. We also wanted a user interface that didn’t look like it was designed in the 80s.

The team’s breakthrough moment came when they realized their point-and-click cursor system was similar to finger tapping on a touchscreen, something most of us do every day.

We were going to design our own touchscreen hardware, but then realized the best ones were already on the market, laughed Nuyujukian, so we went on Amazon instead and bought a Nexus 9 tablet.

The team took their existing setup and reworked it so that patient T6’s brain waves could control where she tapped on the Nexus touchscreen. It was a surprisingly easy modification: the neuroprosthetic communicated with the tablet through existing Bluetooth protocols, and the system was up and running in less than a year.

“Basically the tablet recognized the prosthetic as a wireless Bluetooth mouse,” explained Nuyujukian. We pointed her to a web browser app and told her to have fun.

In a series of short movie clips, the team demonstrated patient T6 Googling questions about gardening, taking full advantage of the autocompletion feature to speed up her research. T6 had no trouble navigating through tiny links and worked the standard QWERTY keyboard efficiently.

Think about it, said Nuyujukian, obviously excited. It’s not just a prettier user interface; she now has access to the entire Android app store.

According to previous studies, the device can function for at least two years without experiencing any hardware or software issues. The team is trying to make the implant even sturdier to extend its lifespan in the brain.

“We set out to utilize what’s already been perfected in terms of the hardware to make the experience more pleasant,” said Nuyujukian. “We’ve now showed that we can expand the scope of our system to a standard tablet.”

But the team isn’t satisfied. They are now working on ways to implement click-and-drag and multi-touch maneuvers. They also want to expand to other operating systems, enable the patients to use the device 24/7 without supervision, and expand their pilot program to more patients in all three of the BrainGate clinical sites.

“Our goal is to unlock the full user interface common to general-purpose computers and mobile devices,” said Nuyujukian. “This is a first step towards developing a fully-capable brain-controlled communication and computer interface for restoring function for people with paralysis.”
 
Neat, jaganar. Thank you for sharing that. He was a great human being. :)
 
A Genomics Revolution: Evolution by Natural Selection to Evolution by Intelligent Direction

Humanity is moving from evolution by natural selection (Darwinism) to evolution by intelligent direction.

For most of human history, our average life expectancy was only about 26 years.

We would procreate at age 13, live just long enough to help our children raise their children, and then, on average, die at age 26 (so we were no longer taking food from the mouths of our grandchildren).

It was through technological innovation — sanitation and germ theory — that we moved life expectancy from 26 to the mid 50s. Recently, because of modern medicine's progress in treating heart disease and cancer, we've bumped up today's global average human lifespan to 71 years.

But this is just the beginning.

Advances over the next 10 to 15 years will move life expectancy north of 100.

This post is about advances in reading, writing, and building elements of the human body.

C. Lee really would have been ancient back then. :oldrazz:
 
Brain-Controlling Sound Waves Used to Steer Genetically Modified Worms



Move over optogenetics, there’s a new cool mind-bending tool in town.

A group of scientists, led by Dr. Sreekanth Chalasani at the Salk Institute in La Jolla, California, discovered a new way to control neurons using bursts of high-pitched sound pulses in worms.

Dubbed “sonogenetics,” scientists say the new method can control brain, heart and muscle cells directly from outside the body, circumventing the need for invasive brain implants such as the microfibers used in optogenetics.

Ya got worms?
 
Nikola Tesla. Poor guy gets praised and they forget to name check the guy.
 
Superstorm Sandy prompted Stevens Institute of Technology to design a solar-powered house that should withstand six-foot-high floods.
 
Beware of ads that use inaudible sound to link your phone, TV, tablet, and PC

All the more reason for people to distrust advertisements. It is increasingly obvious how unethical and even illegal some of these advertising agencies have become.

Privacy advocates are warning federal authorities of a new threat that uses inaudible, high-frequency sounds to surreptitiously track a person's online behavior across a range of devices, including phones, TVs, tablets, and computers.

The ultrasonic pitches are embedded into TV commercials or are played when a user encounters an ad displayed in a computer browser. While the sound can't be heard by the human ear, nearby tablets and smartphones can detect it. When they do, browser cookies can now pair a single user to multiple devices and keep track of what TV commercials the person sees, how long the person watches the ads, and whether the person acts on the ads by doing a Web search or buying a product.
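The basic detection idea is straightforward to sketch: embed a tone above the audible range in an audio stream, then look for excess energy at that frequency in the receiver's spectrum. The frequency, amplitude, and threshold below are illustrative; real beacons modulate data onto the carrier rather than emitting a single bare tone.

```python
import numpy as np

# Ultrasonic beacon embed/detect sketch (all parameters illustrative).
FS = 44100          # sample rate, Hz
BEACON_HZ = 18000   # above most adults' hearing, below FS/2

def embed_beacon(audio, strength=0.2):
    """Add a faint ultrasonic tone on top of an audio signal."""
    t = np.arange(len(audio)) / FS
    return audio + strength * np.sin(2 * np.pi * BEACON_HZ * t)

def beacon_present(audio, threshold=8.0):
    """Flag the beacon if the 18 kHz bin towers over the typical bin."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / FS)
    bin_idx = np.argmin(np.abs(freqs - BEACON_HZ))
    return bool(spectrum[bin_idx] > threshold * np.median(spectrum))

audio = np.random.default_rng(2).standard_normal(FS)  # one second of noise
print(beacon_present(audio), beacon_present(embed_beacon(audio)))
```

Note how little the beacon needs: any app with microphone access and a simple spectral check like this could recognize the tone, which is exactly why the practice is hard for users to notice or block.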

Cross-device tracking raises important privacy concerns, the Center for Democracy and Technology wrote in recently filed comments to the Federal Trade Commission. The FTC has scheduled a workshop on Monday to discuss the technology. Often, people use as many as five connected devices throughout a given day—a phone, computer, tablet, wearable health device, and an RFID-enabled access fob. Until now, there hasn't been an easy way to track activity on one and tie it to another.

"As a person goes about her business, her activity on each device generates different data streams about her preferences and behavior that are siloed in these devices and services that mediate them," CDT officials wrote. "Cross-device tracking allows marketers to combine these streams by linking them to the same individual, enhancing the granularity of what they know about that person."

The officials said that companies with names including SilverPush, Drawbridge, and Flurry are working on ways to pair a given user to specific devices. Adobe is also developing cross-device tracking technologies, although there's no mention of it involving inaudible sound. Without a doubt, the most concerning of the companies the CDT mentioned is San Francisco-based SilverPush.

CDT officials wrote:
Cross-device tracking can also be performed through the use of ultrasonic inaudible sound beacons. Compared to probabilistic tracking through browser fingerprinting, the use of audio beacons is a more accurate way to track users across devices. The industry leader of cross-device tracking using audio beacons is SilverPush. When a user encounters a SilverPush advertiser on the web, the advertiser drops a cookie on the computer while also playing an ultrasonic audio through the use of the speakers on the computer or device. The inaudible code is recognized and received on the other smart device by the software development kit installed on it. SilverPush also embeds audio beacon signals into TV commercials which are "picked up silently by an app installed on a [device] (unknown to the user)." The audio beacon enables companies like SilverPush to know which ads the user saw, how long the user watched the ad before changing the channel, which kind of smart devices the individual uses, along with other information that adds to the profile of each user that is linked across devices.

The user is unaware of the audio beacon, but if a smart device has an app on it that uses the SilverPush software development kit, the software on the app will be listening for the audio beacon and once the beacon is detected, devices are immediately recognized as being used by the same individual. SilverPush states that the company is not listening in the background to all of the noises occurring in proximity to the device. The only factor that hinders the receipt of an audio beacon by a device is distance and there is no way for the user to opt-out of this form of cross-device tracking. SilverPush’s company policy is to not "divulge the names of the apps the technology is embedded," meaning that users have no knowledge of which apps are using this technology and no way to opt-out of this practice. As of April of 2015, SilverPush’s software is used by 67 apps and the company monitors 18 million smartphones.

SilverPush's ultrasonic cross-device tracking was publicly reported as long ago as July 2014. More recently, the company received a new round of publicity when it obtained $1.25 million in venture capital. The CDT letter appears to be the first time the privacy-invading potential of the company's product has been discussed in detail. SilverPush officials didn't respond to e-mail seeking comment for this article.

Cross-device tracking already in use


The CDT letter went on to cite articles reporting that cross-device tracking has been put to use by more than a dozen marketing companies. The technology, which is typically not disclosed and can't be opted out of, makes it possible for marketers to assemble a shockingly detailed snapshot of the person being tracked.

"For example, a company could see that a user searched for sexually transmitted disease (STD) symptoms on her personal computer, looked up directions to a Planned Parenthood on her phone, visits a pharmacy, then returned to her apartment," the letter stated. "While previously the various components of this journey would be scattered among several services, cross-device tracking allows companies to infer that the user received treatment for an STD. The combination of information across devices not only creates serious privacy concerns, but also allows for companies to make incorrect and possibly harmful assumptions about individuals."

Use of ultrasonic sounds to track users has some resemblance to badBIOS, a piece of malware that a security researcher said used inaudible sounds to bridge air-gapped computers. No one has ever proven badBIOS exists, but the use of the high-frequency sounds to track users underscores the viability of the concept.

Now that SilverPush and others are using the technology, it's probably inevitable that it will remain in use in some form. But right now, there are no easy ways for average people to know if they're being tracked by it and to opt out if they object. Federal officials should strongly consider changing that.
Ars Technica
 
Failed Windows 3.1 system blamed for shutting down Paris airport

And now a very, very retro story about the amazingly still in use Windows 3.1 shutting down a modern international airport.

Paris Orly airport had to close temporarily last Saturday after the failure of a system running Windows 3.1—yes, the operating system from 1992—left it unable to operate in fog.

French satirical weekly Le Canard Enchaîné reported the failure, and Vice expanded on the claims.

Orly uses a system called DECOR to communicate Runway Visual Range (RVR) information to pilots. In poor weather conditions—such as the fog the airport experienced on Saturday—this system is essential. Last Saturday it stopped working, and the airport struggled to figure out why.

This use of ancient systems is apparently not unusual. Vice quotes Alexandre Fiacre, the secretary general of France's UNSA-IESSA air traffic controller union, as saying that "The tools used by Aéroports de Paris controllers run on four different operating systems, that are all between 10 and 20 years old," with Windows 3.1 being joined by Windows XP and unspecified UNIX systems. Fiacre says that the systems are poorly maintained as well. Moreover, the age of these systems means that it's hard to find staff who can work with them, with Fiacre explaining, "We are starting to lose the expertise [to deal] with that type of operating system. In Paris, we have only three specialists who can deal with DECOR-related issues." And this problem is getting worse as "One of them is retiring next year, and we haven't found anyone to replace him."

Le Canard Enchaîné writes that according to France's transport minister, airport systems will be upgraded by 2017. Fiacre, however, is unconvinced, saying it will be "2019 at the earliest, perhaps even in 2021."

Whenever the upgrade does happen, they might even switch to something made this century.
Ars Technica
 
