243 results for “Hyperreality”

MIT’s new voiceless interface can read the words in your head

Vanessa Bates Ramirez
October 17th 2018

The way we interact with the technology in our lives is getting progressively more seamless. If typing terms or addresses into your phone wasn’t easy enough, now you can just tell Siri to do the search or pull up the directions for you. Don’t feel like getting off the couch to flick a switch, or want your house to be lit up by the time you pull into your driveway? Just tell your Echo home assistant what you want, and …

Ad Blockers Go Real Life

Van Mensvoort
November 3rd 2017
The Brand Killer augmented reality headset boomerangs ad blocking into the physical realm.

Cyborg Cabin Crew

Ruben Baart
May 26th 2017
Air New Zealand is equipping its flight attendants with AR headsets to explore in-flight optimization, giving passengers a glimpse of what the future of air travel might look like.

Experience Death with VR

Julie Reindl
May 10th 2017
A virtual reality experiment tries to help people frightened of death with a simulated out-of-body experience.

New AI Model Can Mimic Any Human Voice

Elle Zhan Wei
May 6th 2017
Lyrebird is an AI model capable of synthesizing anyone’s voice from just a one-minute audio sample.

Turn Your Smartphone into a Microscope

Ruben Baart
January 1st 2017
Discover the cellular world with stick-on camera lenses for your smartphone.

Waiter, There’s a Pig in My Soup!

Monika Kozub
October 6th 2016
A miniature piglet staring up at you from your soup, to remind you of the origins of your meal.

Pokémon Go Improves Players’ Mental Health

Ruben Baart
July 21st 2016
Pokémon Go is shaping social relationships amongst individuals.

Male Pregnancy: Reality or Science Fiction?

Margherita Olivo
January 9th 2016
The first uterus transplant in the United States raises the question: could uterus transplants allow men to get pregnant too?

Smog: an Augmented Reality?

Hendrik-Jan Grievink
December 12th 2015

A few days ago, images of iconic buildings in Beijing as they look with and without intense smog were posted on Weibo, one of China’s most popular social media platforms. Interestingly, these images speak the visual language of augmented reality apps, in which an additional layer of information is projected on top of the perceptible environment as seen through the lens of a camera, usually on a hand-held device. But in this particular case, an interesting reversal seems to take place.…

MIT’s new voiceless interface can read the words in your head

The way we interact with the technology in our lives is getting progressively more seamless. If typing terms or addresses into your phone wasn’t easy enough, now you can just tell Siri to do the search or pull up the directions for you. Don’t feel like getting off the couch to flick a switch, or want your house to be lit up by the time you pull into your driveway? Just tell your Echo home assistant what you want, and presto—lights on.

Engineers have been working on various types of brain-machine interfaces to take this seamlessness one step further, be it by measuring activity in the visual cortex to recreate images, or training an algorithm to "speak" for paralyzed patients based on their brain activation patterns.

At the Association for Computing Machinery’s Intelligent User Interface (IUI) conference in Tokyo, a team from the MIT Media Lab unveiled AlterEgo, a wearable interface that "reads" the words users are thinking—without the users having to say anything out loud.

[embed]https://youtu.be/RuUSc53Xpeg[/embed]

If you thought Google Glass was awkward-looking, AlterEgo’s not much sleeker; the tech consists of a white plastic strip that hooks over the ear and extends below the jaw, with an additional attachment placed just under the wearer’s mouth. The strip contains electrodes that pick up neuromuscular signals generated when the user thinks of a certain word, silently "saying" it inside his or her head. A machine learning system then interprets the signals and identifies which words the user had in mind—and, amazingly, it does so correctly 92 percent of the time.

Arnav Kapur, a graduate student who led AlterEgo’s development, said, “The motivation for this was to build an IA device—an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

It’s Not All in Your Head

Who knew your face made specific, teeny muscle movements when you think? Isn’t that the fun of it, that there’s no way anyone but you can know what's in your head?

It turns out we have a system that prepares for physical speech; it's active even when we don't say anything out loud, and the preparation extends all the way to our muscles, which give off myoelectric signals based on what they think we're about to say.

To figure out which areas of our faces give off the strongest neuromuscular signals related to speech, the MIT team had test subjects think of and silently say (also called “subvocalize”) a sequence of words four times, with a group of 16 electrodes placed on different parts of subjects’ faces each time.

Analysis of the resulting data showed that signals from seven specific electrode locations best deciphered subvocalized words. The team fed the data to a neural network, which was able to identify patterns between certain words and the signals AlterEgo had picked up.
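
The channel-selection step described above can be illustrated with a toy sketch: synthetic 16-channel "recordings" in which only seven channels actually carry word-dependent signal, ranked by how much each channel's average reading shifts between words. Everything here—the word list, the signal strengths, the scoring rule—is invented for illustration and does not reflect the MIT team's actual data or analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 16          # candidate electrode sites tested
N_KEEP = 7               # sites to retain
WORDS = ["one", "two", "plus", "minus", "equals"]  # toy vocabulary
TRIALS_PER_WORD = 40

# Synthetic trials: channels 0-6 carry a word-dependent offset,
# channels 7-15 are pure noise.
X, y = [], []
for w, _word in enumerate(WORDS):
    for _ in range(TRIALS_PER_WORD):
        trial = rng.normal(0.0, 1.0, N_CHANNELS)
        trial[:N_KEEP] += (w + 1) * 0.8   # word-specific signal
        X.append(trial)
        y.append(w)
X, y = np.array(X), np.array(y)

# Score each channel by the variance of its per-word mean reading:
# informative channels shift with the word, noise channels barely move.
per_word_means = np.array([X[y == w].mean(axis=0) for w in range(len(WORDS))])
scores = per_word_means.var(axis=0)
keep = np.argsort(scores)[-N_KEEP:]
print(sorted(keep.tolist()))   # → [0, 1, 2, 3, 4, 5, 6]
```

With five words and a deliberately strong signal the separation is easy; the point is only the shape of the procedure: record from many sites, score each one, keep the top N.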

More Than Words

Thus far, the system’s abilities are limited to fairly straightforward words; the researchers used simple math problems and chess moves to collect initial data, with the range of users’ vocabularies limited to about 20 possible words. So while its proof of concept is pretty amazing, AlterEgo has a ways to go before it will be able to make out all your thoughts. The tech’s developers are aiming to expand its capabilities, though, and their future work will focus on collecting data for more complex words and conversations.
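
At a vocabulary of roughly 20 words, the recognition task is a standard small-vocabulary classification problem. A minimal, purely illustrative sketch follows: random "signature" vectors stand in for real electrode features, the word list is a guess in the spirit of the math-and-chess prompts, and the nearest-centroid classifier is a stand-in for whatever network AlterEgo actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)

VOCAB = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "plus", "minus", "times", "equals",
         "pawn", "knight", "bishop", "rook", "queen", "king"]
N_FEATURES = 7           # one feature per retained electrode site

# Each word gets a fixed "signature"; trials are noisy copies of it.
signatures = rng.normal(0.0, 1.0, (len(VOCAB), N_FEATURES))

def make_trials(n_per_word):
    y = np.repeat(np.arange(len(VOCAB)), n_per_word)
    X = signatures[y] + rng.normal(0.0, 0.3, (len(y), N_FEATURES))
    return X, y

X_train, y_train = make_trials(30)
X_test, y_test = make_trials(10)

# Nearest-centroid classifier: predict the word whose mean training
# pattern is closest to the observed trial.
centroids = np.array([X_train[y_train == w].mean(axis=0)
                      for w in range(len(VOCAB))])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == y_test).mean()
print(f"accuracy on held-out trials: {accuracy:.0%}")
```

A real system would learn features and decision boundaries with a neural network rather than centroids, but the bookkeeping behind a figure like "92 percent"—held-out trials, fraction of words identified correctly—works the same way.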

What’s It For?

While technologies like AlterEgo can bring convenience to our lives, we should stop and ask ourselves how much intrusiveness we’re willing to allow in exchange for just that—convenience, as opposed to need. Do I need to have electrodes read my thoughts while I’m, say, grocery shopping in order to get the best deals, or save the most time? Or can I just read price tags and walk a little faster?

When discussing the usefulness of the technology, Pattie Maes, a professor of media arts and sciences at MIT and Kapur’s thesis advisor, mentioned the inconvenience of having to take out your phone and look something up during a conversation. “My students and I have been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” she said.

Thad Starner is a professor at Georgia Tech’s College of Computing. He wasn't involved in AlterEgo’s creation, but he's done a lot of work in wearable tech and was closely involved with Google Glass. Starner had some ideas about more utilitarian applications for AlterEgo, pointing out that in high-noise environments, such as on an airport’s tarmac, on the flight deck of an aircraft carrier, or in power plants or printing presses, the system would “be great to communicate with voice in an environment where you normally wouldn’t be able to.”

Starner added, “This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.” He also mentioned the tech would be useful for special operations and the disabled.

Gearing research for voiceless interfaces like AlterEgo towards these practical purposes would likely boost support for the tech, while simultaneously taming fears of Orwellian mind-reading and invasions of mental privacy. It’s a conversation that will get louder—inside engineers’ heads and out—as progress in the field advances.

Image Credit: Lorrie Lejeune / MIT

This article originally appeared on Singularity Hub, a publication of Singularity University.


The way we interact with the technology in our lives is getting progressively more seamless. If typing terms or addresses into your phone wasn’t easy enough, now you can just tell Siri to do the search or pull up the directions for you. Don’t feel like getting off the couch to flick a switch, or want your house to be lit up by the time you pull into your driveway? Just tell your Echo home assistant what you want, and presto—lights on.

Engineers have been working on various types of brain-machine interfaces to take this seamlessness one step further, be it by measuring activity in the visual cortex to recreate images, or training an algorithm to "speak" for paralyzed patients based on their brain activation patterns.

At the Association for Computing Machinery’s ACM Intelligent User Interface conference in Tokyo, a team from MIT Media Lab unveiled AlterEgo, a wearable interface that "reads" the words users are thinking—without the users having to say anything out loud.

Video: https://youtu.be/RuUSc53Xpeg

If you thought Google Glass was awkward-looking, AlterEgo’s not much sleeker; the tech consists of a white plastic strip that hooks over the ear and extends below the jaw, with an additional attachment placed just under the wearer’s mouth. The strip contains electrodes that pick up neuromuscular signals, which are released when the user thinks of a certain word, silently "saying" it inside his or her head. A machine learning system then interprets the signals and identifies which words the user had in mind—and, amazingly, it does so correctly 92 percent of the time.

Arnav Kapur, a graduate student who led AlterEgo’s development, said, “The motivation for this was to build an IA device—an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

It’s Not All in Your Head

Who knew your face made specific, teeny muscle movements when you think? Isn’t that the fun of it, that there’s no way anyone but you can know what's in your head?

It turns out we have a system that prepares for physical speech; it's active even when we don't say anything out loud, and the preparation extends all the way to our muscles, which give off myoelectric signals based on what they think we're about to say.

To figure out which areas of our faces give off the strongest neuromuscular signals related to speech, the MIT team had test subjects think of and silently say (also called “subvocalize”) a sequence of words four times, with a group of 16 electrodes placed on different parts of subjects’ faces each time.

Analysis of the resulting data showed that signals from seven specific electrode locations best deciphered subvocalized words. The team fed the data to a neural network, which was able to identify patterns between certain words and the signals AlterEgo had picked up.
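The pipeline described above (read signals from the seven most informative electrode locations, then let a model match signal patterns to a vocabulary of about 20 words) can be sketched as a toy classifier. Everything below is a hypothetical stand-in: the synthetic "signals" and the nearest-centroid matching are illustration only, not the MIT team's actual data or neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WORDS = 20        # approximate vocabulary size in the initial study
N_CHANNELS = 7      # electrode locations that best deciphered subvocalized words
SAMPLES_PER_WORD = 40

# Fabricated stand-in data: give each word a characteristic mean signal
# across the 7 channels, plus noise (real input would be myoelectric traces).
centroids = rng.normal(size=(N_WORDS, N_CHANNELS))
X = np.repeat(centroids, SAMPLES_PER_WORD, axis=0) + 0.3 * rng.normal(
    size=(N_WORDS * SAMPLES_PER_WORD, N_CHANNELS))
y = np.repeat(np.arange(N_WORDS), SAMPLES_PER_WORD)

# Hold out 20% of recordings for evaluation.
perm = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = perm[:split], perm[split:]

# "Training": estimate each word's mean signal from the training recordings.
means = np.stack([X[train][y[train] == w].mean(axis=0) for w in range(N_WORDS)])

# "Recognition": label each held-out recording with the closest mean signal.
dists = np.linalg.norm(X[test][:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"accuracy on held-out recordings: {accuracy:.2f}")
```

On this easy synthetic data the toy recognizer scores far above the 5% chance level for 20 words; the real system's challenge is that genuine neuromuscular signals are far noisier and less cleanly separated than these fabricated features.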

More Than Words

Thus far, the system’s abilities are limited to fairly straightforward words; the researchers used simple math problems and chess moves to collect initial data, with the range of users’ vocabularies limited to about 20 possible words. So while its proof of concept is pretty amazing, AlterEgo has a ways to go before it will be able to make out all your thoughts. The tech’s developers are aiming to expand its capabilities, though, and their future work will focus on collecting data for more complex words and conversations.

What’s It For?

While technologies like AlterEgo can bring convenience to our lives, we should stop and ask ourselves how much intrusiveness we’re willing to allow in exchange for just that—convenience, as opposed to need. Do I need to have electrodes read my thoughts while I’m, say, grocery shopping in order to get the best deals, or save the most time? Or can I just read price tags and walk a little faster?

When discussing the usefulness of the technology, Pattie Maes, a professor of media arts and sciences at MIT and Kapur’s thesis advisor, mentioned the inconvenience of having to take out your phone and look something up during a conversation. “My students and I have been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” she said.

Thad Starner is a professor at Georgia Tech’s College of Computing. He wasn't involved in AlterEgo’s creation, but he's done a lot of work in wearable tech and was closely involved with Google Glass. Starner had some ideas about more utilitarian applications for AlterEgo, pointing out that in high-noise environments, such as on an airport’s tarmac, on the flight deck of an aircraft carrier, or in power plants or printing presses, the system would “be great to communicate with voice in an environment where you normally wouldn’t be able to.”

Starner added, “This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.” He also mentioned the tech would be useful for special operations and the disabled.

Gearing research for voiceless interfaces like AlterEgo towards these practical purposes would likely increase support for the tech, while simultaneously taming fears of Orwellian mind-reading and invasions of mental privacy. It’s a conversation that will get louder—inside engineers’ heads and out—as progress in the field advances.

Image Credit: Lorrie Lejeune / MIT

This article originally appeared on Singularity Hub, a publication of Singularity University.
