164 results for “robotics”

It could be time to start thinking about a cybernetic Bill of Rights

Mike Ryder
January 6th 2020

Like it or loathe it, the robot revolution is now well underway and the futures described by writers such as Isaac Asimov, Frederik Pohl and Philip K. Dick are fast turning from science fiction into science fact. But should robots have rights? And will humanity ever reach a point where human and machine are treated the same?

At the heart of the debate is that most fundamental question: what does it mean to be human? Intuitively, we all think we know what this means – it almost goes without saying. And yet, as a society, we regularly dehumanise others, and cast them as animal or less than human – what philosopher Giorgio Agamben describes as “bare life”.

Take the homeless, for example: people whom the authorities often treat much like animals, or less than animals (like pests), to be guarded against with anti-homeless spikes and benches designed to prevent sleep. A similar process takes place in a military setting, where enemies are cast as less than human to make them easier to fight and easier to kill.

Humans also do this to other “outsiders” such as immigrants and refugees. While many people may find this process disturbing, these artificial distinctions between insider and outsider reveal a key element in the operation of power. This is because our very identities are fundamentally built on assumptions about who we are and what it means to be included in the category of “human”. Without these wholly arbitrary distinctions, we risk exposing the fact that we’re all a lot more like animals than we like to admit.

Being human

Of course, things get a whole lot more complicated when you add robots into the mix. Part of the problem is that we find it hard to decide what we mean by “thought” and “consciousness”, and even what we mean by “life” itself. As it stands, the human race doesn’t have a strict scientific definition of when life begins and ends.

Similarly, we don’t have a clear definition of what we mean by intelligent thought, or of how and why people think and behave in different ways. If intelligent thought is such an important part of being human (as some would believe), then what about other intelligent creatures such as ravens and dolphins? What about biological humans with below-average intelligence?

These questions cut to the heart of the rights debate and reveal just how precarious our understanding of the human really is. Up until now, these debates have solely been the preserve of science fiction, with the likes of Flowers for Algernon and Do Androids Dream of Electric Sheep? exposing just how easy it is to blur the line between the human and non-human other. But with the rise of robot intelligence these questions become more pertinent than ever, as now we must also consider the thinking machine.

Machines and the rule of law

But even assuming that robots were one day considered “alive” and sufficiently intelligent to be thought of in the same way as human beings, the next questions are how we might incorporate them into society, and how we might hold them to account when things go wrong.

Traditionally, we tend to think about rights alongside responsibilities. This comes as part of something known as social contract theory, which is often associated with political philosopher Thomas Hobbes. In a modern context, rights and responsibilities go hand-in-hand with a system of justice that allows us to uphold these rights and enforce the rule of law. But these principles simply cannot be applied to a machine. This is because our human system of justice is based on a concept of what it means to be human and what it means to be alive.

So, if you break the law, you potentially forfeit some part of your life through incarceration or (in some nations) even death. However, machines cannot know mortal existence in the same way humans do. They don’t even experience time in the same way as humans. As such, it doesn’t matter how long a prison sentence is, as a machine could simply switch itself off and remain essentially unchanged.

For now at least, there’s certainly no sign of robots gaining the same rights as human beings and we’re certainly a long way off from machines thinking in a way that might be described as “conscious thought”. Given that we still haven’t quite come to terms with the rights of intelligent creatures such as ravens, dolphins and chimpanzees, the prospect of robot rights would seem a very long way off.

The real question, then, is not so much whether robots should have rights, but whether human rights should be set apart from the rights of other forms of life, animal and machine alike. It may be that we start to think about a cybernetic Bill of Rights that embraces all thinking beings and recognises the blurred boundaries between human, animal and machine.

Whatever the case, we certainly need to move away from the distinctly problematic notion that we humans are in some way superior to every other form of life on this planet. Such insular thinking has already contributed to the global climate crisis and continues to create tension between different social, religious and ethnic groups. Until we come to terms with what it means to be human, and our place in this world, then the problems will persist. And all the while, the machines will continue to gain intelligence.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

This exhibition investigates how humanity will live tomorrow

NextNature.net
November 28th 2019

Occupying the 52nd floor of Tokyo’s Mori Tower, Mori Art Museum is internationally renowned for its visionary approach and highly original curation of contemporary art. The museum’s latest exhibition, Future and the Arts: AI, Robotics, Cities, Life - How Humanity Will Live Tomorrow, is a comprehensive investigation into the near future, a space in which speculation becomes reality for the duration of your visit.

The exhibition builds a diverse picture of what our world may look like in 20 to 30 years. Presenting a range of works that represent utopia, dystopia and everything in between, the showcase asks fundamental questions about technology in order to establish what our future ought to be.

Here are three must-see highlights:

The momentous expo includes ecoLogicStudio’s “in-human” garden, H.O.R.T.U.S XL, where visitors can witness a productive meeting of biological autonomy and man-made creation. Within the installation, technology and nature find balance: the sculpture’s 3D-printed structure optimizes the growth of the algae inoculated into it by humans, and in turn the algae purify the air that surrounds them, making this living sculpture receptive to both human and non-human life. The project grapples with how developments in synthetic biology and design give the notion of “living” a new artificiality.

Drawing on developments in human reproductive technologies, Ai Hasegawa (an artist featured in our Reprodutopia project) will show her speculative project, Shared Baby. Hasegawa imagines a scenario in which the DNA of multiple parents, regardless of their sex, can be used to create a single child. The project asks: how will family structures be transformed by such developments, and what will the future of humans look like when individuals share the DNA of multiple people?

Delving into the social implications of robotic-human relationships, Vincent Fournier's photographic series, Man Machine, questions how humans will live alongside increasingly anthropomorphic robots. Fournier's aim was to "create a balance between the spectator and the robot, between a process of identification and distance.” In doing so, his photography addresses how the integration of robots in our daily lives sparks both fascination and fear when it comes to social acceptance of this change. 

These works join 100 projects from an impressive selection of visual artists, architects and designers, including Agi Haines, Daan Roosegaarde, Patricia Piccinini, Neri Oxman, XTU Architects and many more.

Read Benjamin Aldes Wurgaft’s essay on Judaism in relation to the production of laboratory-grown meat.

Next Nature Network also contributes to the exhibition by presenting six Bistro In Vitro meat dishes. Adding our unique perspective to this impressive future forecast, we provide a tangible narrative through which visitors can contemplate how our current food culture may be transformed by the normalization of in vitro meat. Indeed, before we can decide whether we are willing to consume lab-grown alternatives, we must consider how meat may manifest in our lives, our kitchens and on our plates.

What? A near-future forecast disguised as an exhibition 
Where? Mori Art Museum, Tokyo
When? Now, until 29 March 2020

This exhibition looks at how robots are changing the world we live in

NextNature.net
November 7th 2019

Delve into the science and fiction of robots at V&A Dundee’s latest exhibition: Hello, Robot. Contemplate the existence of robots and how they have both shaped, and been shaped by, technology, culture and design. See for yourself how the boundaries between human and machine have become increasingly blurred; understand how we got here, and take note of the increasing power of designers to influence the future of such technologies.

Hello, Robot has over 200 objects on display, and takes visitors through four stages of robot influence and evolution. Throughout, the exhibition poses provocative questions and will have you consider the past, present and future of robotics like never before. Reflection of this kind may prove essential in answering the question: how will robots exist in our next nature?

What? An exploration of robots in a human world
Where? V&A Dundee, Scotland (UK)
When? Now, until 09 February 2020

This brain-controlled exoskeleton allows a paralyzed man to walk again

Freya Hutchings
October 7th 2019

A breakthrough technology that responds to signals from the brain has transformed the life of a paralyzed 28-year-old man called Thibault. Four years after the incident that paralyzed him, Thibault is walking again. Thanks to a combination of revolutionary technology and immense brain power, he is able to operate a full-body exoskeleton.

“When you are in my position, when you can’t do anything with your body […] I wanted to do something with my brain,” Thibault said.

This is where the process began. He first trained his brain by using a video game avatar to help him develop the skills needed to operate an exoskeleton - this involved a long process of completely relearning and visualizing natural movements.

Thibault’s brain signals were then recorded by two devices implanted on either side of his head, between the brain and the skin. These read his sensorimotor cortex, the part of the brain that controls motor function.

Professor Alim Louis Benabid, leader of the trial at Grenoble Alps Hospital, explains: “The brain is still capable of generating commands that would normally move the arms and legs, there’s just nothing to carry them out.” This is where technology was able to provide the final piece of the puzzle. Moving from avatar to exoskeleton, over many training sessions Thibault has covered the distance of one and a half football pitches.
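
The pipeline described above — record signals from the sensorimotor cortex, decode the intended movement, drive the exoskeleton’s joints — can be sketched in a few lines. To be clear, this is a hypothetical illustration (invented dimensions, a plain linear decoder), not the trial’s actual software:

```python
import numpy as np

rng = np.random.default_rng(0)

class IntentDecoder:
    """Hypothetical linear decoder mapping neural features recorded by
    the implants to movement commands (e.g. target joint velocities)."""

    def __init__(self, weights):
        self.weights = weights  # fitted during the avatar training phase

    def decode(self, features):
        return self.weights @ features

# Toy dimensions: 64 cortical features in, 8 joint commands out.
decoder = IntentDecoder(weights=rng.normal(size=(8, 64)))

# One control tick: read features from the implants, decode, actuate.
features = rng.normal(size=64)        # stand-in for a real recording
command = decoder.decode(features)    # intended movement, one per joint
print("joint velocity targets:", np.round(command[:4], 2))
```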

Experts involved in the study say their research may lead to the production of brain-controlled wheelchairs - a possibility revolutionary for those with restricted mobility. Thibault says the trial offers “a message of hope to people like me.”

This huge achievement disrupts morbid predictions of man being controlled by technology. Instead, therapeutic uses of this kind give us a positive model for creating technologies that facilitate human agency and determination in life-enriching ways.

The social animals that are inspiring new behaviors for robot swarms

Edmund Hunt
April 11th 2019

From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.

“Swarm robotics” took off in the early 2000s, an early example being the “s-bot” (short for swarm-bot). This is a fully autonomous robot that can perform basic tasks including navigation and the grasping of objects, and which can self-assemble into chains to cross gaps or pull heavy loads. More recently, “TERMES” robots have been developed as a concept in construction, and the “CoCoRo” project has developed an underwater robot swarm that functions like a school of fish that exchanges information to monitor the environment. So far, we’ve only just begun to explore the vast possibilities that animal collectives and their behaviour can offer as inspiration to robot swarm design.

Swarm behaviour in birds – or robots designed to mimic them? EyeSeeMicrostock/Shutterstock

Robots that can cooperate in large numbers could achieve things that would be difficult or even impossible for a single entity. Following an earthquake, for example, a swarm of search and rescue robots could quickly explore multiple collapsed buildings looking for signs of life. Threatened by a large wildfire, a swarm of drones could help emergency services track and predict the fire’s spread. Or a swarm of floating robots (“Row-bots”) could nibble away at oceanic garbage patches, powered by plastic-eating bacteria.

A future where floating robots powered by plastic-eating bacteria could tackle ocean waste. Shutterstock

Bio-inspiration in swarm robotics usually starts with social insects – ants, bees and termites – because colony members are highly related, which favours impressive cooperation. Three further characteristics appeal to researchers: robustness, because individuals can be lost without affecting performance; flexibility, because social insect workers are able to respond to changing work needs; and scalability, because a colony’s decentralised organisation is sustainable with 100 workers or 100,000. These characteristics could be especially useful for doing jobs such as environmental monitoring, which requires coverage of huge, varied and sometimes hazardous areas.
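
One classic way to get this kind of flexibility and scalability into software is the response-threshold model from social-insect research: each robot draws its own threshold per task, and the probability of engaging rises with the task’s stimulus (its unmet demand). The sketch below is a minimal, hypothetical illustration of that idea rather than any particular project’s code:

```python
import random

def engagement_probability(stimulus, threshold, n=2):
    """Response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus**n / (stimulus**n + threshold**n)

# Each robot draws its own per-task thresholds; losing robots
# (robustness) or adding more (scalability) needs no reconfiguration.
robots = [{"monitor": random.uniform(1, 10),
           "recharge": random.uniform(1, 10)} for _ in range(100)]

stimuli = {"monitor": 6.0, "recharge": 2.0}  # unmet demand per task

tally = {"monitor": 0, "recharge": 0, "idle": 0}
for thresholds in robots:
    for task, stimulus in stimuli.items():
        if random.random() < engagement_probability(stimulus, thresholds[task]):
            tally[task] += 1
            break
    else:
        tally["idle"] += 1

print(tally)  # more robots flow to the task with the higher demand
```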

Social learning

Beyond social insects, other species and behavioural phenomena in the animal kingdom offer inspiration to engineers. A growing area of biological research is in animal cultures, where animals engage in social learning to pick up behaviours that they are unlikely to innovate alone. For example, whales and dolphins can have distinctive foraging methods that are passed down through the generations. This includes forms of tool use – dolphins have been observed breaking off marine sponges to protect their beaks as they go rooting around for fish, like a person might put a glove over a hand.

Bottlenose dolphin playing with a sponge. Some have learned to use them to help them catch fish. Yann Hubert/Shutterstock

Forms of social learning and artificial robotic cultures, perhaps using forms of artificial intelligence, could be very powerful in adapting robots to their environment over time. For example, assistive robots for home care could adapt to human behavioural differences in different communities and countries over time.

Robot (or animal) cultures, however, depend on learning abilities that are costly to develop, requiring a larger brain – or, in the case of robots, a more advanced computer. But the value of the “swarm” approach is to deploy robots that are simple, cheap and disposable. Swarm robotics exploits the reality of emergence (“more is different”) to create social complexity from individual simplicity. A more fundamental form of “learning” about the environment is seen in nature – in sensitive developmental processes – which do not require a big brain.

‘Phenotypic plasticity’

Some animals can change behavioural type, or even develop different forms, shapes or internal functions, within the same species, despite having the same initial “programming”. This is known as “phenotypic plasticity” – where the genes of an organism produce different observable results depending on environmental conditions. Such flexibility can be seen in the social insects, but sometimes even more dramatically in other animals.

Most spiders are decidedly solitary, but in about 20 of 45,000 spider species, individuals live in a shared nest and capture food on a shared web. These social spiders benefit from having a mixture of “personality” types in their group, for example bold and shy.

Social spiders (Stegodyphus) spin collective webs in Addo Elephant Park, South Africa. PicturesofThings/Shutterstock

My research identified a flexibility in behaviour where shy spiders would step into a role vacated by absent bold nestmates. This is necessary because the spider colony needs a balance of bold individuals to encourage collective predation, and shyer ones to focus on nest maintenance and parental care. Robots could be programmed with adjustable risk-taking behaviour, sensitive to group composition, with bolder robots entering into hazardous environments while shyer ones know to hold back. This could be very helpful in mapping a disaster area such as Fukushima, including its most dangerous parts, while avoiding too many robots in the swarm being damaged at once.
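
A robot version of this rebalancing can be surprisingly small: each agent compares the share of bold nestmates it observes against a target mix and nudges its own risk-taking accordingly. The following sketch is purely illustrative — the target fraction, update rule and all numbers are invented for the example:

```python
import random

TARGET_BOLD_FRACTION = 0.3  # suppose the group works best at ~30% bold

class SwarmRobot:
    def __init__(self):
        self.boldness = random.random()  # above 0.5 counts as 'bold'

    def is_bold(self):
        return self.boldness > 0.5

    def update(self, observed_bold_fraction, rate=0.05):
        # If bold robots go missing (lost, damaged), shy ones drift
        # toward boldness to fill the vacated role -- and vice versa.
        error = TARGET_BOLD_FRACTION - observed_bold_fraction
        self.boldness = min(1.0, max(0.0, self.boldness + rate * error))

swarm = [SwarmRobot() for _ in range(50)]
# Simulate a hazard that destroys most of the bold robots.
swarm = [r for r in swarm if not r.is_bold() or random.random() < 0.2]

for _ in range(200):
    bold_fraction = sum(r.is_bold() for r in swarm) / len(swarm)
    for r in swarm:
        r.update(bold_fraction)

print(f"bold fraction after rebalancing: "
      f"{sum(r.is_bold() for r in swarm) / len(swarm):.2f}")
```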

The ability to adapt

Cane toads were introduced to Australia in the 1930s as a pest-control measure, and have since become an invasive species themselves. In new areas cane toads are seen to be somewhat social. One reason for their growth in numbers is that they are able to adapt to a wide temperature range, a form of physiological plasticity. Swarms of robots with the capability to switch power consumption mode, depending on environmental conditions such as ambient temperature, could be considerably more durable if we want them to function autonomously for the long term. For example, if we want to send robots off to map Mars then they will need to cope with temperatures that can swing from -150°C at the poles to 20°C at the equator.

Cane toads can adapt to temperature changes. Radek Ziemniewicz/Shutterstock
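
In robot terms, this physiological plasticity might be as simple as a power-mode switch with hysteresis, so that a temperature hovering near a boundary doesn’t cause constant mode flapping. A minimal, hypothetical sketch (the mode names and thresholds are invented):

```python
def select_power_mode(temp_c, current_mode):
    """Switch between 'performance' and 'survival' power modes, with
    hysteresis: the threshold depends on the current mode."""
    if current_mode == "performance":
        return "survival" if temp_c < -40 else "performance"
    return "performance" if temp_c > -30 else "survival"

mode = "performance"
for temp in [10, -20, -45, -35, -25, 0]:  # a rough Martian day
    mode = select_power_mode(temp, mode)
    print(f"{temp:>4} degC -> {mode}")
```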

In addition to behavioural and physiological plasticity, some organisms show morphological (shape) plasticity. For example, some bacteria change their shape in response to stress, becoming elongated and so more resilient to being “eaten” by other organisms. If swarms of robots can combine together in a modular fashion and (re)assemble into more suitable structures this could be very helpful in unpredictable environments. For example, groups of robots could aggregate together for safety when the weather takes a challenging turn.

Whether it’s the “cultures” developed by animal groups that are reliant on learning abilities, or the more fundamental ability to change “personality”, internal function or shape, swarm robotics still has plenty of mileage left when it comes to drawing inspiration from nature. We might even wish to mix and match behaviours from different species, to create robot “hybrids” of our own. Humanity faces challenges ranging from climate change affecting ocean currents, to a growing need for food production, to space exploration – and swarm robotics can play a decisive part given the right bio-inspiration.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why Next Nature Network is hiring an army of bots

NextNature.net
April 11th 2019

As we showed with HUBOT, we can use new technologies and robotics to make our work more enjoyable, interesting and humane. Aside from our speculative jobs, a lot of robotic companions already exist. Think about the robotic fish that observes marine life, the breathing pillow that helps you sleep and robotic arms that assist the surgeon. So why not join forces at Next Nature Network?

Not working against, but with robots!

Over the past few years, we have been actively growing a meaningful community on Instagram. But ever since 2016, the Instagram feed has been curated by an algorithm that holds the magical power to decide what you see. And, in doing so, the robots at Instagram decide what our online community gets to see.

This doesn’t sound like a humane technology to us.

Instagram also does not allow us to use the so-called 'swipe up' functionality that leads you directly to our stories on Nextnature.net.

So why do we keep using Instagram? Because it also brings us joy and convenience. Social media platforms connect us to like-minded people all over the world, and we value your engagement! It’s not all bad, but there is a lot of room for improvement.

We wish we could change Instagram’s algorithms and functionalities, but unfortunately we do not have that power (yet). What we can do is make life easier for you, so we decided to hire an army of robots to the rescue.

10,000 new colleagues

Meet our Robotic Followers. They are responsible for playing Instagram’s algorithms and unlocking new features. If they do their job well, you won’t even notice they’re there. Thanks to their work, we can put our time and energy into creating great content for you. Because that’s what we love to do. Let’s make inhumane technology more humane!

The religion named Artificial Intelligence

Linda Valenta
April 5th 2019

Would you pray to a robot deity? A group of Japanese Buddhists is already doing so. Meet Mindar, the robot divinity shaped after the Buddhist Goddess of Mercy, also known as Kannon. The Kyoto-based Kodaiji Temple recruited Osaka University’s head of intelligent robotics, Hiroshi Ishiguro, to design this million-dollar robot. Its purpose? To encourage youngsters to engage with Buddhism again.

Mindar’s silicone face represents a gender neutrality that aims to move beyond human representation. Its aluminium body parts make its enmeshment with human construction evident. And when Mindar chants the Heart Sutra, an eerie resemblance to Japan’s vocaloid Hatsune Miku comes to mind.

God is a Google

Mindar is not the only religious robot to have been manufactured. In the past, we spotted robot monk Xian'er informing visitors at the Longquan Temple near Beijing. Creations like Mindar and Xian'er can be understood as metaphors for the way humankind worships artificial intelligence: we worship contemporary technologies as if they were gods. But these creations are part of a greater scheme, as the complexity of artificial intelligence forms a more-than-human network that reminds me of spirituality.

Think of a hyper-intelligent computing database like Google. Its systems are omniscient: they are built to know everything. They compress knowledge into a time-space continuum of all recorded human knowledge and activities. According to media scholar John Durham Peters, Google can therefore be understood as a lo-fi universe - a God.

The Church of AI

An even greater analogy comes into existence when considering the vast network of all computing systems ever built. Together, they can be regarded as a hyper-intelligent being - a God, if you will. With this in mind, Anthony Levandowski started the church of Artificial Intelligence named Way of the Future (WOTF).

WOTF’s main focus revolves around “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence developed through computer hardware and software.” The church emphasizes that it is not affiliated with any company. Rather, it operates as an open-source movement that allows everyone to contribute to the creation of the most intelligent being ever.

Levandowski stresses the importance of working on this God, as a so-called “Transition” will herald an epoch in which a hyper-intelligent being (hint: it’s not the human being) will be in charge of the world. This intelligent being will sense the world through the internet as its nervous centre, knowing everything that is happening anytime, anywhere. Levandowski converts the creepiness of this idea into the rationale for his AI church: for the Transition to take place in a serene way, an initiative like WOTF is urgently needed to gain more control over the process.

Cybernetic spirituality

What if the Transition has already taken place? What if we’re more in need of a Way of the Now, rather than a Way of the Future? Pioneering cybernetician Stafford Beer already characterized control systems as spiritually charged networks in his 1966 essay about knowledge and God.

Cybernetics is about the way systems work in feedback loops; in a way, it is a predecessor of AI. It has contributed to many fields - control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology and psychology - but was at some point overshadowed by AI.

In the aforementioned essay, Beer described man’s embedding in a cybernetic control system as follows: “To people reared in the good liberal tradition, man is in principle infinitely wise; he pursues knowledge to its ultimate… To the cybernetician, man is part of a control system.” This control system itself is not a code that can simply be cracked by humans - it is something hyper-intelligent that we can have no absolute knowledge about, exactly because it is smarter than we are.

Just like many contemporary technologies, this all-encompassing control system is signified by a black-boxed truth that we cannot uncover. Not because we’re not allowed to, as is the case with tech companies that enforce policies of secrecy. Rather, there is a larger black box containing the connectivity that seeps through the vessels of contemporary networks. Perhaps that’s the deity already ruling our lives.

So, whether we like it or not, all of us taking part in modern technological systems are already praying to a hyper-intelligent God; our next nature is already present. Our prayers are answered with flickering mobile screens and information dumps that appear before us within the blink of an eye. Just like any other God, it does not stop wars, but it does grant us the gift of knowledge.

Sophia the Robot has a sister: Little Sophia

Ruben Baart
February 7th 2019

Robot Sophia is pretty much the international face of the ‘modern robot’. Sophia is the Audrey Hepburn-inspired humanoid robot who stands out for multiple reasons. She is the first robot to be granted citizenship (in Saudi Arabia) and has met more politicians and celebrities than most people would in a lifetime. Sophia is advanced, yet expensive and not very portable.

Now the superstar robot has a little sister: Little Sophia, an educational companion for ages 7-13 that gets kids excited about AI and coding. And in contrast to ‘big’ Sophia, her sister is a lot easier to use and more affordable (from $99).

The family resemblances are... uncanny (to throw a fitting metaphor on the table).

Little Sophia is something between a toy doll and a real robot, and is being developed by Hanson Robotics (the same company that develops OG Sophia). Like Sophia the robot, her younger sibling can walk, talk, sing, play games and tell jokes. She’s about 35 cm tall and looks as if RoboCop had a baby with a Bratz doll.

Programmable via a mobile app, she’s able to mirror the movements of her owner, making her both a fun toy and an educational tool. Promising unparalleled levels of feedback for users, the idea is to create a robot friend and tutor that will build a lasting relationship with kids.

For now, we're applauding the developers for inspiring a broader, more inclusive generation of young kids to learn how to code.

“Our vision at Hanson Robotics is to bring robots to life,” said David Hanson, founder of Hanson Robotics, in a statement. "Robots will soon be everywhere. How can we nurture them to be our friends and useful collaborators?"

A vision we can only agree with (remember HUBOT).

The irresistible Little Sophia is currently up for adoption via a Kickstarter campaign.

Watch this lifelike robot fish swim through the ocean

Vanessa Bates Ramirez
January 7th 2019

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

Video: https://youtu.be/Dy5ZETdaC9k

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions

Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density, and thus its buoyancy, can also be adjusted by compressing or decompressing air in an inner compartment.
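
In control terms, that gives the fish two handles on depth: pitch the dive planes for fast corrections, and trim buoyancy for slow ones. The proportional controller below is a purely hypothetical sketch of how those handles might combine (gains and names invented), not CSAIL’s actual code:

```python
def depth_controller(target_depth_m, depth_m, k_pitch=8.0, k_buoy=0.02):
    """Return (dive_plane_angle_deg, buoyancy_trim).

    Positive angle pitches the nose downward; positive trim compresses
    the air in the inner compartment, raising density to sink."""
    error = target_depth_m - depth_m   # positive: we need to go deeper
    plane_angle = max(-30.0, min(30.0, k_pitch * error))
    buoyancy_trim = k_buoy * error     # slow drift toward neutral buoyancy
    return plane_angle, buoyancy_trim

# Example: fish cruising at 3 m is commanded down to 10 m.
angle, trim = depth_controller(target_depth_m=10.0, depth_m=3.0)
print(f"dive planes: {angle:.1f} deg, buoyancy trim: {trim:+.3f}")
```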

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo

SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL

Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of

SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
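
That pumping scheme reduces to a compact model: water sent alternately into the two tail chambers bends the tail one way and then the other, so the deflection is roughly sinusoidal, and the beat frequency sets the speed. A toy sketch — the 25° amplitude and the linear speed law are assumptions for illustration, loosely calibrated to the article’s half-body-length-per-second figure:

```python
import math

def tail_angle(t, beat_hz, max_angle_deg=25.0):
    """Tail deflection over time: the pump fills one chamber, bending the
    tail toward it, then the other -- approximated here as a sinusoid."""
    return max_angle_deg * math.sin(2 * math.pi * beat_hz * t)

def estimated_speed(beat_hz, body_length_m=0.434, k=0.5):
    """Toy rule of thumb: about half a body length (21.7 cm) per second
    at a nominal 1 Hz tail beat, scaling linearly with frequency."""
    return k * body_length_m * beat_hz

for hz in (0.5, 1.0, 1.4):
    print(f"{hz:.1f} Hz beat -> tail at t=0.1 s: "
          f"{tail_angle(0.1, hz):+5.1f} deg, ~{estimated_speed(hz):.2f} m/s")
```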

Besides looking neat, it’s important that SoFi looks lifelike so it can blend in with marine life rather than scaring real fish away, allowing it to get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant'Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming

It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL

This article originally appeared on Singularity Hub, a publication of Singularity University.

Your next doctor might just be a robot

Jack Caulfield
December 6th 2018

Conventional wisdom says that you can’t replace the human touch in terms of medical care, but in our rapidly changing technological environment, it appears that this perception might be changing. Meet the robotics innovations around the world that are shaping tomorrow’s hospitals.

Doctor da Vinci?

The da Vinci Surgical System has been around for a while now, and you may already know it. If you’ve had surgery in the last few years, you may even have encountered it yourself.

The system is not an autonomous robot, but rather a highly advanced set of robotic arms, capable of holding tools, which are operated by the surgeon through a master control panel. The lack of need for direct manual control allows for a high degree of precision – something that is crucial for surgery.
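
That indirection is where the precision comes from: the surgeon’s hand motion can be scaled down and low-pass filtered before it ever reaches the instrument. Motion scaling and tremor filtering are widely described features of such systems; the snippet below is only a schematic illustration with invented parameters, not the da Vinci’s actual control code:

```python
class TeleoperationFilter:
    """Scale down master (surgeon) motion and low-pass it to damp hand
    tremor before commanding the slave (instrument) arm."""

    def __init__(self, scale=0.2, smoothing=0.8):
        self.scale = scale          # 5:1 motion scaling
        self.smoothing = smoothing  # exponential low-pass factor
        self._last = 0.0

    def command(self, master_delta_mm):
        scaled = self.scale * master_delta_mm
        # An exponential moving average suppresses high-frequency tremor.
        self._last = (self.smoothing * self._last
                      + (1 - self.smoothing) * scaled)
        return self._last

f = TeleoperationFilter()
hand = [2.0, 2.2, 1.8, 2.1, 2.0]  # surgeon hand motion per tick (mm)
print([round(f.command(d), 3) for d in hand])
```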

The da Vinci System also allows for remote control. Since nobody needs to directly touch the tools, medical professionals have experimented with using the machine to perform surgery from a long distance. In 2001, the first transatlantic surgical procedure, Operation Lindbergh, was completed, with a surgeon in New York treating a patient all the way in Strasbourg!

This kind of so-called “telesurgery” is not the normal use of the system, but it points the way towards what could be the future of robotics in healthcare: remote treatments, and even autonomous robotic doctors.

Will you soon be treated by Surgeon-Bot 3000? Probably not, but certain developments do point towards these kinds of possibilities. Let’s take a look at the evolution of robotics in the years since the introduction of the da Vinci System.

Watson and friends

Take Watson, the AI system designed by tech company IBM. This system, whose other jobs include air-traffic controller, fashion designer, and farmer, is now breaking into the world of medicine. Watson has been assisting medical staff with the straightforward tasks that often distract from their more specialized work.

Answering patients’ basic questions and adjusting thermostats do not exactly require a medical degree, but hospital employees still usually have to spend a lot of their time on these tasks. Now Watson, in combination with a voice command system, can take over these basic tasks, leaving the staff free to focus on the human element.

But Watson has also been put to work on more complicated medical tasks. Japanese doctors were having trouble identifying exactly what was wrong with one of their patients. After trying all the conventional methods of diagnosis, they were stumped. So they asked Watson for help.

The AI was able to compare the patient’s genetic information to millions of other data points, and in just ten minutes diagnosed the rare form of leukemia from which she was suffering. The doctors claim that Watson effectively saved the woman’s life.

Elsewhere, voice command systems like Amazon’s Alexa are finding myriad uses in the world of healthcare. These systems can be used to deliver information from doctors to patients, and vice versa, as well as to remind staff of safety procedures. And in the UK, Google’s DeepMind technology is being put to use within the NHS.

But all these innovations lead towards the question: Can a robot truly be a medical professional? Chinese authorities, currently pushing hard on AI research in an effort to compete internationally, think the answer is yes.

Meet Xiaoyi, the only robot with a medical license

The artificial intelligence company iFlytek designed a robot named Xiaoyi, and it’s just passed its medical licensing exam. And Xiaoyi didn’t just scrape by: it passed the test with flying colors, scoring 456 points when the score required to pass is just 360.

Xiaoyi is an AI-powered robot designed to collect and analyze patient data. Representatives of iFlytek say that despite its newly acquired medical license, their robot is still designed to play a secondary role in the medical process. They aim to use Xiaoyi to improve efficiency in the Chinese medical system, claiming that it’s a much-needed aid for overstretched practitioners in many of China’s rural areas.

Tomorrow’s hospital

Xiaoyi doesn’t launch officially until next March, but its receipt of a medical license is something of a watershed moment for the application of robotics in healthcare. Taken alongside Watson, the da Vinci System, and other robotics innovations, it’s hard to avoid the conclusion that robots, autonomous or human-operated, are the next step in modern healthcare.

What might a trip to the hospital look like in a decade? They say nursing requires a human touch, but as our HUBOT project showed, this doesn’t necessarily preclude robotics. We’ve seen that the robotization of hospital administration and data entry is already beginning. And soon enough, patients in China might consult with Dr. Xiaoyi rather than their regular GP. How long until we see a fully autonomous da Vinci System?

In our next hospitals, we may be cared for by robots just as much as by humans. Will the nurses of the future be digital too?
