46 results for “Artificial Intelligence”

Five ways AI could make your car as smart as a human passenger

Max Eiza
January 6th 2020

Driving long distances without a passenger can be lonely. If you’ve ever done it, you might have wished for a companion to talk to – someone emotionally intelligent who can understand you and help you on the road. The disembodied voice of SatNav helps to fill the monotonous silence, but it can’t hold a conversation or keep you safe.

Research on driverless cars is well underway, but less is heard about the work being done to make cars a smart …

It could be time to start thinking about a cybernetic Bill of Rights

Mike Ryder
January 6th 2020

Like it or loathe it, the robot revolution is now well underway and the futures described by writers such as Isaac Asimov, Frederik Pohl and Philip K. Dick are fast turning from science fiction into science fact. But should robots have rights? And will humanity ever reach a point where human and machine are treated the same?

At the heart of the debate is that most fundamental question: what does it mean to be human? Intuitively, we all think we …

Explore your relationship with AI in this exhibition

NextNature.net
December 12th 2019

What makes us human? And why do we sometimes fear artificial intelligence? And what about the technological singularity – the moment in time when artificial intelligence outperforms human intelligence? The increasing yet often invisible implementation of AI in our daily life (think voice assistants and deep-learning algorithms) raises more questions than answers. Should we be defensive, or welcome this new technology as part of our human evolution?

The exhibition AI: More than Human (now at Forum in Groningen—previously at the Barbican …

Truly smart homes could help dementia patients live independently

Dorothy Monekosso
October 28th 2019

You might already have what’s often called a “smart home”, with your lights or music connected to voice-controlled technology such as Alexa or Siri. But when researchers talk about smart homes, we usually mean technologies that use artificial intelligence to learn your habits and automatically adjust your home in response to them. Perhaps the most obvious example of this is thermostats that learn when you are likely to be home and what temperature you prefer, and adjust themselves accordingly without …

AI creates images of delicious food that doesn’t exist

Tristan Greene
January 13th 2019

A team of researchers from Tel-Aviv University developed a neural network capable of reading a recipe and generating an image of what the finished, cooked product would look like. As if DeepFakes weren’t bad enough, now we can’t be sure the delicious food we see online is real.

The Tel-Aviv team, consisting of researchers Ori Bar El, Ori Licht, and Netanel Yosephian, created their AI using a modified version of a generative adversarial network (GAN) called StackGAN V2 and 52K …

Future AI may hallucinate and get depressed — just like the rest of us

Tristan Greene
April 23rd 2018

Scientists believe the introduction of a hormone-like system, such as the one found in the human brain, could give AI the ability to reason and make decisions like people do. Recent research indicates human emotion, to a certain extent, is the byproduct of learning. And that means machines may have to risk depression or worse if they ever want to think or feel.…

An AI Is Writing the Next “Game of Thrones”

Jack Caulfield
November 17th 2017
One fan has become so impatient for the conclusion to "Game of Thrones", he's programmed an AI to write it for him. Move over, George R.R. Martin!

How to Fool a Neural Network

Jack Caulfield
November 16th 2017
A neural network helps computers with image recognition. It is usually tough to fool. But one group of researchers has found a way to reliably trick it.

What You Should Know About Artificial Intelligence Changing Jobs

Megan Ray Nichols
November 2nd 2017
It should come as no surprise that artificial intelligence naturally extends into the way we work. Let's look at how AI changes the way we relate to work.

AI Draws New Worlds from Its Artificial Memory

Siebren de Vos
September 15th 2017
An AI draws streets and spaces by stitching together its artificial memories of real places.

Five ways AI could make your car as smart as a human passenger

Driving long distances without a passenger can be lonely. If you’ve ever done it, you might have wished for a companion to talk to – someone emotionally intelligent who can understand you and help you on the road. The disembodied voice of SatNav helps to fill the monotonous silence, but it can’t hold a conversation or keep you safe.

Research on driverless cars is well underway, but less is heard about the work being done to make cars a smart companion for drivers. In the future, the cars still driven by humans are likely to become as sensitive and attentive to their driver’s needs as another person. Sound far-fetched? It’s closer than you might think.

1. Ask your car questions

We’re already familiar with AI in our homes and mobile phones. Siri and Alexa answer questions and find relevant search items from around the web on demand. The same will be possible in cars within the near future. Mercedes are integrating Siri into their new A-class car. The technology can recognise the driver’s voice and their way of speaking – rather than just following a basic set of commands, the AI could interpret meaning from conversation in the same way another person could.

2. From the screen to your drive

Those with longer memories may remember a talking car that was a regular on TV: Knight Rider’s super-intelligent KITT, a self-aware car that was fiercely loyal to Michael, the driver. Though KITT’s mounted flame thrower and bomb detector might not make it into commercial vehicles, drivers could talk to their cars through a smart band on their wrists. The technology is being developed to allow people to start their car before they reach it, to warm the seats, to set the destination on the navigation system, flash the lights, lock the doors and sound the horn – all from a distance with voice command.

3. Big Motor is watching you

A driver alert system already exists that, through a series of audible alerts and vibrations, tries to keep the driver awake or warn them against sudden lane departure. By 2021 though, there are plans to install in-car cameras to monitor a driver’s behaviour.

If the driver looked away from the road for a period of time, or appeared drunk or sleepy, the car would take action. This might start with slowing down and alerting a call centre for someone to check on the driver, but if the driver didn’t respond, the car could take control, slow down and park in a safe place. The potential to improve road safety is promising, but there are credible concerns for what in-car cameras could mean for individual privacy.
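
To make that escalation logic concrete, here is a minimal sketch of how such a staged response might be structured. Everything in it (the states, the 30-second cutoff, the action names) is an illustrative assumption, not any manufacturer's published specification.

```python
# Illustrative sketch of a staged driver-monitoring response.
# States, thresholds and action names are invented for illustration,
# not taken from any real automotive system.

from enum import Enum, auto

class Alertness(Enum):
    ATTENTIVE = auto()
    DISTRACTED = auto()    # eyes off the road for too long
    UNRESPONSIVE = auto()  # ignores prompts, may be asleep or unwell

def respond(state: Alertness, seconds_unresponsive: float = 0.0) -> list:
    """Return the car's actions, mildest first."""
    if state is Alertness.ATTENTIVE:
        return []
    actions = ["audible_alert", "vibrate_wheel"]
    if state in (Alertness.DISTRACTED, Alertness.UNRESPONSIVE):
        actions.append("reduce_speed")
    if state is Alertness.UNRESPONSIVE:
        actions.append("notify_call_centre")
        if seconds_unresponsive > 30:  # hypothetical cutoff
            actions += ["take_control", "park_safely"]
    return actions

print(respond(Alertness.UNRESPONSIVE, seconds_unresponsive=45))
# ['audible_alert', 'vibrate_wheel', 'reduce_speed',
#  'notify_call_centre', 'take_control', 'park_safely']
```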

4. A cure for road rage

Increasingly intelligent and perceptive cars won’t stop at visual cues. An AI assistant has been developed which can pick up on the driver’s mood and well-being by detecting their heart rate, eye movements, facial expressions and the tone of their voice. It’s suggested the car would learn the driver’s habits and interact with them by, for example, playing the driver’s favourite music to calm them down. It can also suggest some nice places to go – perhaps a nearby café or park – where the driver could stop to improve their state of mind.

5. A butler on the road

As technology is developed to monitor the mood of drivers, the next step may be cars which can act to improve them. Autonomous vehicles which can take over driving when drivers are stressed could change the windscreen display to show photographs or peaceful scenes. Smart glass windscreens could even black out the surroundings entirely to create a tranquil space – known tentatively in ongoing research as “cocoon mode” – where the interior is invisible from outside and the occupants can rest while the car drives. Cars might even dispense snacks and drinks on demand from refrigerated cartridges, using technology that’s under development but not scheduled to make its debut until 2035.

Whether for good or ill, cars are likely to change beyond recognition in the near future. It may no longer be ridiculous to think that the wildest science fiction dreams could be driving us to work in the not so distant future.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

It could be time to start thinking about a cybernetic Bill of Rights

Like it or loathe it, the robot revolution is now well underway and the futures described by writers such as Isaac Asimov, Frederik Pohl and Philip K. Dick are fast turning from science fiction into science fact. But should robots have rights? And will humanity ever reach a point where human and machine are treated the same?

At the heart of the debate is that most fundamental question: what does it mean to be human? Intuitively, we all think we know what this means – it almost goes without saying. And yet, as a society, we regularly dehumanise others, and cast them as animal or less than human – what philosopher Giorgio Agamben describes as “bare life”.

Take the homeless, for example: people the authorities treat much like animals, or less than animals (like pests), to be guarded against with anti-homeless spikes and benches designed to prevent sleep. A similar process takes place within a military setting, where enemies are cast as less than human to make them easier to fight and easier to kill.

Humans also do this to other “outsiders” such as immigrants and refugees. While many people may find this process disturbing, these artificial distinctions between insider and outsider reveal a key element in the operation of power. This is because our very identities are fundamentally built on assumptions about who we are and what it means to be included in the category of “human”. Without these wholly arbitrary distinctions, we risk exposing the fact that we’re all a lot more like animals than we like to admit.

Being human

Of course, things get a whole lot more complicated when you add robots into the mix. Part of the problem is that we find it hard to decide what we mean by “thought” and “consciousness”, and even what we mean by “life” itself. As it stands, the human race doesn’t have a strict scientific definition of when life begins and ends.

Similarly, we don’t have a clear definition of what we mean by intelligent thought, or of how and why people think and behave in different ways. If intelligent thought is such an important part of being human (as some would believe), then what about other intelligent creatures such as ravens and dolphins? What about biological humans with below-average intelligence?

These questions cut to the heart of the rights debate and reveal just how precarious our understanding of the human really is. Up until now, these debates have solely been the preserve of science fiction, with the likes of Flowers for Algernon and Do Androids Dream of Electric Sheep? exposing just how easy it is to blur the line between the human and non-human other. But with the rise of robot intelligence these questions become more pertinent than ever, as now we must also consider the thinking machine.

Machines and the rule of law

But even assuming that robots were one day to be considered “alive” and sufficiently intelligent to be thought of in the same way as human beings, the next question is how we might incorporate them into society and how we might hold them to account when things go wrong.

Traditionally, we tend to think about rights alongside responsibilities. This comes as part of something known as social contract theory, which is often associated with political philosopher Thomas Hobbes. In a modern context, rights and responsibilities go hand-in-hand with a system of justice that allows us to uphold these rights and enforce the rule of law. But these principles simply cannot be applied to a machine. This is because our human system of justice is based on a concept of what it means to be human and what it means to be alive.

So, if you break the law, you potentially forfeit some part of your life through incarceration or (in some nations) even death. However, machines cannot know mortal existence in the same way humans do. They don’t even experience time in the same way as humans. As such, it doesn’t matter how long a prison sentence is, as a machine could simply switch itself off and remain essentially unchanged.

For now at least, there’s certainly no sign of robots gaining the same rights as human beings and we’re certainly a long way off from machines thinking in a way that might be described as “conscious thought”. Given that we still haven’t quite come to terms with the rights of intelligent creatures such as ravens, dolphins and chimpanzees, the prospect of robot rights would seem a very long way off.

The question, then, is not so much whether robots should have rights, but whether human rights should be distinguished from the rights of other forms of life, animal and machine alike. It may be that we start to think about a cybernetic Bill of Rights that embraces all thinking beings and recognises the blurred boundaries between human, animal and machine.

Whatever the case, we certainly need to move away from the distinctly problematic notion that we humans are in some way superior to every other form of life on this planet. Such insular thinking has already contributed to the global climate crisis and continues to create tension between different social, religious and ethnic groups. Until we come to terms with what it means to be human, and our place in this world, then the problems will persist. And all the while, the machines will continue to gain intelligence.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Explore your relationship with AI in this exhibition

What makes us human? And why do we sometimes fear artificial intelligence? And what about the technological singularity – the moment in time when artificial intelligence outperforms human intelligence? The increasing yet often invisible implementation of AI in our daily life (think voice assistants and deep-learning algorithms) raises more questions than answers. Should we be defensive, or welcome this new technology as part of our human evolution?

The exhibition AI: More than Human (now at Forum in Groningen—previously at the Barbican in London) invites you to explore your relationship with artificial intelligence.

Curators Suzanne Livingston and Maholo Uchida have asked artists, scientists and researchers to demonstrate AI’s potential to revolutionize our lives. Experience the capabilities of AI in the form of cutting-edge research projects by DeepMind, Massachusetts Institute of Technology (MIT) and Neri Oxman; and interact directly with exhibits and installations to experience the possibilities first-hand.

Take your chance and dive into the immersive installation What a Loving and Beautiful World by artist collective teamLab. The visuals consist of Chinese characters and natural phenomena triggered by interaction. When a visitor touches a character, the world contained inside that character unfolds on the walls.

AI, Ain’t I a Woman? is an exploration of AI from a political perspective. Joy Buolamwini is a poet of code who uses art and research to illuminate the social implications of artificial intelligence. In this case, she lays bare the racial bias of facial recognition.

Inspired by the Dutch 'tulip mania' of the 1630s, Anna Ridler draws parallels between tulips and the current mania around cryptocurrencies. Created by an AI, the film shows blooming tulips controlled by the bitcoin price, changing over time as the market fluctuates. The project echoes 17th-century Dutch still-life flower paintings, which, despite their supposed realism, are imagined: the flowers in them could never bloom at the same time. Does cryptocurrency provide us with a similar imagined reality?

Visit the newly opened Forum in Groningen to see these projects and much more! Expect your preconceptions to be challenged and discover how this technology impacts our human essence from historical, scientific, social and creative perspectives.

What? A travelling exhibition to explore our relationship with AI
When? Now, until 30 April 2020
Where? Forum, Groningen

Truly smart homes could help dementia patients live independently

You might already have what’s often called a “smart home”, with your lights or music connected to voice-controlled technology such as Alexa or Siri. But when researchers talk about smart homes, we usually mean technologies that use artificial intelligence to learn your habits and automatically adjust your home in response to them. Perhaps the most obvious example of this is thermostats that learn when you are likely to be home and what temperature you prefer, and adjust themselves accordingly without you needing to change the settings.
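
As a toy illustration of what "learning your habits" means here, the sketch below estimates an hourly occupancy profile from logged observations and picks a target temperature from it. The data, the 0.5 threshold and the temperatures are all invented for the example; a product like a learning thermostat would use far richer models.

```python
# Toy "learning thermostat": learn an hourly occupancy profile from
# history, then choose a target temperature. Data and thresholds are
# invented for illustration only.

from collections import defaultdict

# (hour, was_home) observations, e.g. logged once per hour over weeks
history = [(7, True), (8, False), (18, True), (19, True), (8, False),
           (18, True), (7, True), (19, False), (7, True), (18, True)]

counts = defaultdict(lambda: [0, 0])      # hour -> [times_home, total]
for hour, home in history:
    counts[hour][0] += int(home)
    counts[hour][1] += 1

def target_temp(hour, comfort=21.0, setback=16.0):
    home, total = counts[hour]
    p_home = home / total if total else 0.0
    return comfort if p_home > 0.5 else setback

for hour in (7, 8, 18, 19):
    print(hour, target_temp(hour))   # pre-heat only when likely occupied
```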

My colleagues and I are interested in how this kind of true smart home technology could help people with dementia. We hope it could learn to recognise the different domestic activities a dementia sufferer carries out throughout the day and help them with each one. This could even lead up to the introduction of household robots to automatically assist with chores.

The growing number of people with dementia is encouraging care providers to look to technology as a way of supporting human carers and improving patients’ quality of life. In particular, we want to use technology to help people with dementia live more independently for as long as possible.

Dementia affects people’s cognitive abilities (things like perception, learning, memory and problem-solving skills). There are many ways that smart home technology can help with this. It can improve safety by automatically closing doors if they are left open or turning off cookers if they are left unattended. Bed and chair sensors or wearable devices can detect how well someone is sleeping or if they have been inactive for an unusual amount of time.

Lights, TVs and phones can be controlled by voice-activated technology or a pictorial interface for people with memory problems. Appliances such as kettles, fridges and washing machines can be controlled remotely.

People with dementia can also become disoriented, wander and get lost. Sophisticated monitoring systems using radiowaves inside and GPS outside can track people’s movements and raise an alert if they travel outside a certain area.

All of the data from these devices could be fed into complex artificial intelligence that would automatically learn the typical things people do in the house. This is the classic AI problem of pattern matching (looking for and learning patterns from lots of data). To start with, the computer would build a coarse model of the inhabitants’ daily routines and would then be able to detect when something unusual is happening, such as not getting up or eating at the usual time.

A finer model could then represent the steps in a particular activity such as washing hands or making a cup of tea. Monitoring what the person is doing step by step means that, if they forget halfway through, the system can remind them and help them continue.
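
Here is a minimal sketch of that step-tracking idea. The activity steps, the sensor events and the timeout are made-up assumptions; a real system would infer steps from noisy sensor data rather than clean labels.

```python
# Sketch of the "finer model": match sensor events against the expected
# steps of an activity and remind the person if they stall partway.
# Steps, events and the timeout are illustrative assumptions.

TEA_STEPS = ["fill_kettle", "boil_kettle", "add_teabag", "pour_water"]

def check_progress(events, timeout=120):
    """events: list of (timestamp_seconds, step_name) from home sensors."""
    idx, last_t = 0, (events[0][0] if events else 0)
    for t, step in events:
        if idx < len(TEA_STEPS) and step == TEA_STEPS[idx]:
            idx, last_t = idx + 1, t      # expected step observed
    if idx == len(TEA_STEPS):
        return "activity complete"
    now = events[-1][0] if events else 0
    if now - last_t > timeout:            # stalled mid-activity
        return f"remind: next step is '{TEA_STEPS[idx]}'"
    return "in progress"

print(check_progress([(0, "fill_kettle"), (40, "boil_kettle"),
                      (300, "open_fridge")]))
# remind: next step is 'add_teabag'
```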

The more general model of the daily routine could use innocuous sensors such as those in beds or doors. But for the software to have a more detailed understanding of what is happening in the house you would need cameras and video processing that would be able to detect specific actions such as someone falling over. The downside to these improved models is a loss of privacy.

Future smart homes could include robot carers. Via Miriam Doerr Martin Frommherz/Shutterstock

The smart home of the future could also come equipped with a humanoid robot to help with chores. Research in this area is moving at a steady, albeit slow, pace, with Japan taking the lead with nurse robots.

The biggest challenge with robots in the home or care home is that of operating in an unstructured environment. Factory robots can operate with speed and precision because they perform specific, pre-programmed tasks in a purpose-designed space. But the average home is less structured and changes frequently as furniture, objects and people move around. This is a key problem which researchers are investigating using artificial intelligence techniques, such as capturing data from images (computer vision).

Robots don’t just have the potential to help with physical labour either. While most smart home technologies focus on mobility, strength and other physical characteristics, emotional well-being is equally important. A good example is the PARO robot, which looks like a cute toy seal but is designed to provide therapeutic emotional support and comfort.

Understanding interaction

The real smartness in all this technology comes from automatically discovering how the person interacts with their environment, in order to provide support at the right moment. If we just built technology to do everything for people, it would actually reduce their independence.

For example, emotion-recognition software that judges someone’s feelings from their expression could adjust the house or suggest activities in response, perhaps by changing the lighting or encouraging the patient to take some exercise. As the inhabitant’s physical and cognitive decline increases, the smart house would adapt to provide more appropriate support.

There are still many challenges to overcome, from improving the reliability and robustness of sensors, to preventing annoying or disturbing alarms, to making sure the technology is safe from cybercriminals. And for all the technology, there will always be a need for a human in the loop. The technology is intended to complement human carers and must be adapted to individual users. But the potential is there for genuine smart homes to help people with dementia live richer, fuller and hopefully longer lives.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI creates images of delicious food that doesn’t exist

A team of researchers from Tel-Aviv University developed a neural network capable of reading a recipe and generating an image of what the finished, cooked product would look like. As if DeepFakes weren’t bad enough, now we can’t be sure the delicious food we see online is real.

The Tel-Aviv team, consisting of researchers Ori Bar El, Ori Licht, and Netanel Yosephian, created their AI using a modified version of a generative adversarial network (GAN) called StackGAN V2 and 52K image/recipe combinations from the gigantic recipe1M dataset.

Basically, the team developed an AI that can take almost any list of ingredients and instructions, and figure out what the finished food product looks like.
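
In outline, the approach conditions an image generator on an embedding of the recipe text. The sketch below is a minimal illustration of that conditioning idea, not the team's StackGAN-V2 implementation; the bag-of-words text encoder, layer sizes and 64x64 output are all invented for the example.

```python
# Minimal sketch of text-conditioned image generation: embed the recipe,
# feed the embedding plus noise to a generator that outputs pixels.
# NOT the Tel-Aviv team's StackGAN-V2 code; sizes are assumptions.

import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)   # bag-of-words recipe encoding

    def forward(self, token_ids):
        return self.emb(token_ids)

class Generator(nn.Module):
    def __init__(self, noise=100, cond=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise + cond, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # 64x64 RGB in [-1, 1]
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1)).view(-1, 3, 64, 64)

enc, gen = TextEncoder(), Generator()
recipe = torch.randint(0, 5000, (1, 30))      # stand-in for a tokenised recipe
fake_food = gen(torch.randn(1, 100), enc(recipe))
print(fake_food.shape)  # torch.Size([1, 3, 64, 64])
```

In a full GAN, a discriminator network would score real recipe/photo pairs against the generator's fakes, and the two networks would be trained adversarially until the fakes become convincing.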

Researcher Ori Bar El told The Next Web:

"[It] all started when I asked my grandmother for a recipe of her legendary fish cutlets with tomato sauce. Due to her advanced age she didn’t remember the exact recipe. So, I was wondering if I can build a system that given a food image, can output the recipe. After thinking about this task for a while I concluded that it is too hard for a system to get an exact recipe with real quantities and with “hidden” ingredients such as salt, pepper, butter, flour etc.

Then, I wondered if I can do the opposite, instead. Namely, generating food images based on the recipes.  We believe that this task is very challenging to be accomplished by humans, all the more so for computers. Since most of the current AI systems try replace human experts in tasks that are easy for humans, we thought that it would be interesting to solve a kind of task that is even beyond humans’ ability. As you can see, it can be done in a certain extent of success."

The researchers also acknowledge, in their white paper, that the system isn’t perfect quite yet:

"It is worth mentioning that the quality of the images in the recipe1M dataset is low in comparison to the images in CUB and Oxford102 datasets. This is reflected by lots of blurred images with bad lighting conditions, ”porridge-like images” and the fact that the images are not square shaped (which makes it difficult to train the models). This fact might give an explanation to the fact that both models succeeded in generating ”porridge-like” food images (e.g. pasta, rice, soups, salad) but struggles to generate food images that have a distinctive shape (e.g. hamburger, chicken, drinks)."

This is the only AI of its kind that we know of, so don’t expect this to be an app on your phone anytime soon. But, the writing is on the wall. And, if it’s a recipe, the Tel-Aviv team’s AI can turn it into an image that looks good enough that, according to the research paper, humans sometimes prefer it over a photo of the real thing.

What do you think?

The team intends to continue developing the system, hopefully extending into domains beyond food. Ori Bar El told us:

We plan to extend the work by training our system on the rest of the recipes (we have about 350k more images), but the problem is that the current dataset is of low quality. We have not found any other available dataset suitable for our needs, but we might build a dataset on our own that contains children’s books text and corresponding images.

These talented researchers may have damned foodies on Instagram to a world where we can’t quite be sure whether what we’re drooling over is real, or some robot’s vision of a soufflé.

It’s probably a good time for us all to go out into the real world and stick our faces in some actual food. You know, the kind created by scientists and prepared by robots.

This story is republished from The Next Web under a Creative Commons license. Read the original piece here.

Future AI may hallucinate and get depressed — just like the rest of us

Scientists believe the introduction of a hormone-like system, such as the one found in the human brain, could give AI the ability to reason and make decisions like people do. Recent research indicates human emotion, to a certain extent, is the byproduct of learning. And that means machines may have to risk depression or worse if they ever want to think or feel.

Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown in Lisbon, speaking at the Canonical Computation in Brains and Machines symposium, discussed the implications of recent experiments to discover the effects serotonin has on decision making.

According to Mainen and his team, serotonin may not be related to ‘mood’ or emotional states such as happiness, but instead is a neuro-modulator designed to update and change learning parameters in the brain.

He even opines that such mechanisms may be necessary for machine learning, despite some potentially disturbing side effects, namely the ones people suffer from. In an interview with Science, he said:

"Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong."

The research is still fairly nascent and requires further testing, but experiments conducted on mice indicate serotonin plays a large role in what ‘data’ the brain chooses to keep and how much weight it’s given. In essence, the results of the research show serotonin and dopamine may be intrinsic to the facilitation of a developing intelligence.

In order to determine how serotonin affects decision making, scientists gave mice a choice between two paths, left or right. At the end of one path they placed a reward in the form of water. Once the mice were familiar with the location of the reward, the team was able to trigger a serotonin response in the rodents by moving the water and surprising them. Whether a mouse found the water wasn’t much of a factor in whether its serotonin levels spiked; whether it was surprised was.

When Mainen’s team conducted further experiments, including manually activating serotonin production in an animal running around in a field, they found subjects would slow down and consider the situation almost immediately after a spike. This, according to Mainen, indicates serotonin causes a learning system to place less value on things that just happened (the previous input), instead working to change previous assumptions.
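
Translated into machine-learning terms, that reading of serotonin resembles a surprise-driven learning-rate boost. The toy update rule below is purely illustrative (it is not Mainen's model), but it shows how a "surprise spike" lets an estimator abandon stale assumptions quickly.

```python
# Toy illustration of "serotonin as a learning-rate signal": when the
# world surprises the agent (large prediction error), raise the learning
# rate so old assumptions are revised faster. Thresholds and the boost
# factor are invented for illustration.

def update(estimate, observation, base_lr=0.05, boost=10.0, threshold=1.0):
    error = observation - estimate
    surprised = abs(error) > threshold   # crude stand-in for a serotonin spike
    lr = base_lr * (boost if surprised else 1.0)
    return estimate + lr * error, surprised

estimate = 0.0
for obs in [0.1, 0.0, 0.2, 5.0, 5.1, 5.0]:   # the reward suddenly moves
    estimate, spike = update(estimate, obs)
    print(f"obs={obs:4.1f}  estimate={estimate:5.2f}  spike={spike}")
```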

This is something that could greatly benefit AI.

The researchers also injected the same mice with a serotonin inhibitor and found that learning became delayed. With the hormone (or neurotransmitter, as it’s often called), it only took a couple of days for their brains to normalize new data. That time increased when the mice weren’t able to naturally release serotonin. And that means serotonin (and its effects) may be crucial for human learning.

Whether or not this is useful to machine learning developers depends on how closely they intend to mimic the human brain. Some scientists argue that chemical imbalances in an organic brain are anomalous, but Mainen’s research seems to indicate otherwise. His team hypothesizes that hyper-modulators, similar to serotonin, could be used as ‘shortcuts’ to keep autonomous systems from becoming stuck in outdated models.

Designing robots to deal with a static environment using supervised learning likely won’t prepare machines to deal with the constantly changing real world. But giving them emotions and the capacity to hallucinate things that aren’t real doesn’t seem like a good idea either. Nobody has time to talk their Tesla out of the garage before work because it thinks the Ford Focus next door is secretly plotting against it.

This story is published in partnership with The Next Web. Read the original piece here.

An AI Is Writing the Next "Game of Thrones"

If you are a fan of the Game of Thrones series, you’re probably aware that the promised next instalment, The Winds of Winter, is taking a long time to write. Now one fan has become so impatient for the conclusion to George R.R. Martin’s epic, he has programmed an AI to write it for him. Move over, George!

A Storytelling AI

Martin's fantasy epic, and its TV adaptation, have gained a huge following over the years. Fans have been busy theorizing what will happen next in the notoriously unpredictable series for a long time, but none have gone as far as Zack Thoutt, the software engineer behind the procedurally generated new instalment.

Zack used a recurrent neural network, a sort of artificial intelligence which is capable of learning and evolving based on the input it receives. He fed the network the text of the five current books in Martin's series, so that it could learn to imitate the author's style and memorize the names of characters and locations. Having caught up on events so far, the algorithm is now tasked with generating prose in the style of the existing novels, as a continuation of the story.
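
The sketch below is a generic character-level LSTM of the same family as the network described; it is not Thoutt's actual code, and the architecture, sizes and stand-in corpus are illustrative assumptions. Untrained, it emits gibberish; trained on enough text, models like this pick up style, names and some grammar.

```python
# Minimal character-level recurrent text generator, illustrating the
# kind of network described above. Not Zack Thoutt's code; the tiny
# corpus and layer sizes are placeholders.

import torch
import torch.nn as nn

text = "winter is coming. the north remembers. "   # stand-in for five books
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, idx, state=None):
        h, state = self.rnn(self.emb(idx), state)
        return self.out(h), state

model = CharRNN(len(chars))

def sample(prompt, n=40):
    idx = torch.tensor([[stoi[c] for c in prompt]])
    out, state = model(idx)
    result = prompt
    for _ in range(n):
        probs = out[0, -1].softmax(-1)            # next-character distribution
        nxt = torch.multinomial(probs, 1)
        result += chars[nxt.item()]
        out, state = model(nxt.view(1, 1), state)  # feed the choice back in
    return result

print(sample("the "))  # gibberish until trained on the real corpus
```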

So What Happens Next?

So far the network has generated five chapters, and they certainly provide some twists. Jaime kills Cersei, Varys poisons Daenerys and Jon Snow rides a dragon. The AI has even invented new characters, like the mysterious Greenbeard. The computer-generated prose, however, leaves something to be desired. Though the AI seems to have got the hang of imitating Martin's style, it has trouble generating content that makes sense. A representative quote from chapter two reads:

"He came an hour ago at sunrise, only now the stones seemed to shimmer in a blaze of fear. Each one was the lamb, and he had broken more shift, as a shadow snow brought forth from whitetree, a blade of five different. Every man of the fiery men in towers, the screaming of one had flies in those a castle, half on old wyk toward watch from farther behind, following the tall walls of an oldtown storm."

So far this new digital instalment reads more like Finnegans Wake than anything else. Though it's impressive that the network can generate such a complex text, it doesn't seem like Martin has to worry about being replaced by a robot any time soon.

Source: World Economic Forum

How to Fool a Neural Network

Computers are smart and, thanks to artificial neural networks, they are getting smarter. These networks, modelled after real neurological systems, allow computers to complete complex tasks such as image recognition. Until now, they have been impressively tough to fool. But one group of researchers claims to have found a way to reliably trick these networks into getting it wrong.

Optical Illusions for Neural Networks?

Pictures designed to trick image recognition networks are called "adversarial". That is, they are designed to look like one thing to us, and – through subtle modifications – to look like something entirely different to the network, creating a conflict. Take the example of a cat photo: we can clearly see that it is a cat, but by carefully "perturbing" the image, the team were able to convince Google's InceptionV3 image classifier that it was in fact guacamole. So far so good, but the trick only works from this specific angle. Rotate the image even slightly and the network correctly recognizes it as a cat.

The researchers, a team of MIT students called LabSix, wanted a trick that would work more reliably.
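
The core of such perturbation attacks fits in a few lines. Below is the classic Fast Gradient Sign Method (FGSM), a single-viewpoint attack; LabSix's robust 3D trick layers an "Expectation Over Transformation" procedure on top of this idea, averaging gradients over many rotations and lighting conditions. The random tensor stands in for a real photo, and class 924 is ImageNet's "guacamole".

```python
# Targeted FGSM sketch: nudge an image along the gradient that makes the
# classifier more confident in a chosen wrong label. Illustrative only;
# LabSix's viewpoint-robust attack is more elaborate.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a cat photo
target = torch.tensor([924])        # ImageNet class 924 = "guacamole"

loss = F.cross_entropy(model(image), target)
loss.backward()

eps = 0.01  # small enough that the change is hard for humans to notice
adversarial = (image - eps * image.grad.sign()).clamp(0, 1).detach()
print(model(adversarial).argmax().item())  # drifts toward 924 over a few steps
```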

Adversarial Objects

They intended to show that not only still images but also real-world objects could be made "adversarial" to the network's pattern recognition. For this, they designed an algorithm to create 2D printouts and 3D-printed models with qualities that produce "targeted misclassification" whatever angle they are viewed from, even when blurred due to rapid camera movement.

In other words, they were able to make not just still images but real-world objects which could reliably fool neural networks. To test their creation, the team produced a turtle which the network thought was a rifle, and a baseball it misidentified as an espresso. They even tested the objects against different backgrounds. Putting the "rifle" against a watery backdrop didn't spoil the trick; nor did placing the "espresso" in a baseball mitt. According to the paper, the trick seems to work regardless of visual context.

In a world where we increasingly rely on neural networks for intricate and involved tasks, should we worry that they can be fooled like this? Aside from running Google's image search, the LabSix group point out that these networks are used in "high risk, real world systems". As humans, we are susceptible to certain optical illusions. Perhaps we shouldn't be surprised that neural networks are not so invulnerable either.

What You Should Know About Artificial Intelligence Changing Jobs

Artificial intelligence, or AI, has changed the way we shop online by giving suggestions for things we may want to buy based on past purchases. It has also altered how we send and receive emails, since many platforms automatically filter messages by importance or give suggested responses based on our habits. It should come as no surprise, then, that AI naturally extends into the way we work. Let's look at how artificial intelligence is influencing the way we relate to work.

AI has changed how we search for jobs

Google is the first place many people go when searching for jobs. Now, thanks to a recent update that utilizes AI technology, locating relevant employment postings through the search engine is even simpler. Using a service called Google for Jobs, you can enter the preferred position and location on Google and receive a list of possibilities without having to sign up at the job websites that are advertising the openings.

In this case, AI is simplifying the often-arduous job search process and helping people find relevant job openings in a more centralized manner.

AI changes how we work and the tasks we carry out

There are also many ways AI impacts what we do while at work. AI has already proven it is excellent for repetitive tasks. Some insurance companies use it to streamline the claims process and deploy chatbots to ease the workload of human customer service agents, for example. One argument against AI asserts that the technology lacks empathy, meaning it will never be able to perform the same way as a doctor calming a frightened patient, or an agent soothing an irate customer who has experienced too many faults with a product or service and has run out of patience.

But it is certain that the future workforce must learn how AI can supplement their skills in meaningful ways to succeed in the workplace of the future. The technology exists, and it is being used with increasing frequency to help employees complete repetitive tasks and allow them to focus on other activities instead.

Today’s technologies create opportunities for those with the necessary skills

Experts also assert that people with skills related to AI and associated technologies, such as robotics, will enjoy strong employment outlooks. They'll command more job flexibility, better salaries and other perks that are not available to people who do not have the same knowledge and have not been able or willing to adapt.

New technologies require people to learn new things. It's smart for workers to increase their knowledge base on their own. However, employers can also help fill the gap by ensuring their workers do not get left behind.

Some notable brands, such as eBay and Electronic Arts, are also turning to AI to measure employee performance. They tap into technology and make predictions about things ranging from absenteeism to whether a commute time will adversely affect performance. Theoretically, employers could also use the resources at their disposal to see where skills growth needs to occur, allowing workers to nimbly change with the times.

Artificial intelligence, robotics and similar technologies have become integrated into the way of life of the modern workforce. Employees who do not realize that, and improve themselves accordingly, may find it difficult to succeed in the workplace.

AI Draws New Worlds from Its Artificial Memory

Do you recognize this street? Chances are you don't, as the street simply doesn't exist: it was generated and drawn by an AI. Researcher Qifeng Chen from Stanford University, California, employs a supercomputer to create images of streets and spaces from its artificial memory.

Using a kindergarten technique, Chen taught the AI to paint by numbers, that is, to fill in shapes with the colours corresponding to the given numbers. A human outlined by hand the objects in existing pictures, so that the AI knew which shape should be coloured as a table, chair, window, lamp and so on. The accompanying figure showed how the AI can produce different scenes using this technique (caption: "On the left: input semantic layouts. On the right: synthesized images").

While the pictures produced by the AI still have a dreamy look, imagine its possibilities within a few years. According to Noah Snavely of Cornell University in New York, this is not a problem at the moment, as people are not expecting photorealism in virtual reality yet. One day, Snavely said, it will become possible to have an AI dream up an entire world simply by describing it to the system.
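
To make the "paint by numbers" input format concrete, here is a literal baseline: fill every numbered region with a fixed colour for its label. Chen's network learns realistic texture instead of flat colour; the labels and palette below are invented for illustration and only show the shape of the input.

```python
# A literal "paint by numbers" baseline for semantic layouts: each
# numbered region gets one canonical colour. Labels and palette are
# invented; a learned model would synthesize realistic texture instead.

import numpy as np

PALETTE = {0: (70, 70, 70),     # 0 = road
           1: (100, 180, 100),  # 1 = tree
           2: (180, 130, 70),   # 2 = building
           3: (135, 206, 235)}  # 3 = sky

layout = np.array([[3, 3, 3, 3],
                   [2, 2, 1, 3],
                   [0, 0, 0, 1]])            # a tiny 3x4 semantic layout

image = np.zeros(layout.shape + (3,), dtype=np.uint8)
for label, colour in PALETTE.items():
    image[layout == label] = colour           # fill region with its colour

print(image.shape)  # (3, 4, 3) -- an RGB image, one flat colour per region
```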

Until then we wonder: do systems dream of electric streets?

Source: New Scientist
