
Sunday, April 12, 2015

WHITE SPACE AND SPECTRUM TECHNOLOGIES

CITIZENS CHAIRESTHAI,
TODAY WE PRESENT, IN A SUMMARIZED WAY, THE TECHNOLOGICAL CAPABILITIES OF WHITE SPACE.
WE FOCUS ON THIS THEME BECAUSE WE BELIEVE IT MIGHT GIVE EUROPEANS AND MEDITERRANEANS A GREAT OPPORTUNITY TO LEAD AGAIN THROUGH THE COMMUNICATION SECTOR.
IN THE COMING YEARS THIS SECTOR WILL BE THE MAIN FIELD FOR THE HUMAN RACE TO CONNECT H2H, M2M, OR BOTH, ALL AROUND G(r)AIA THROUGH ONE NET (INTERNET) OR MORE, AS OUR POLICIES PROPOSE (SYNNET).

THE DEFINITION OF WHITE SPACE, FROM A DATA TRANSMISSION GLOSSARY:

White space, in a communications context, refers to underutilized portions of the radio frequency (RF) spectrum. Large portions of the spectrum are currently unused, in particular the frequencies allocated for analog television and  those used as buffers to prevent interference between channels. 
In the United States, frequency allocations in the RF spectrum are made by the Federal Communications Commission (FCC). In November 2008, the FCC voted unanimously to make  unlicensed portions of the spectrum available for use.  At that time, at least three-quarters of the spectrum allocated for analog television was unused. These frequencies will become available once the changeover to digital television is complete in February 2009. 
White space allocation is expected to stimulate development of wireless technologies and services.  According to Google co-founder Larry Page, white space operation will be like "Wi-Fi on steroids," because the signals in that portion of the spectrum have much longer ranges than those currently used for Wi-Fi. The increase in range means that fewer base stations will be required to give better coverage; that increased efficiency, in turn, should yield better service at lower costs. Signals in the white space range can also penetrate through solid objects better, which should yield more reliable service. 
Opponents of white space allocation have argued that it could lead to unexpected instances of disruptive and potentially dangerous interference between different services using the same frequencies at the same time. The FCC is testing white space devices designed to operate in the newly available frequencies to ensure that they will not cause interference. 
According to the FCC, wireless microphones and other low-power auxiliary stations will be able to continue to operate in bands below 700MHz.
SOURCE http://whatis.techtarget.com
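The "longer range" claim in the quoted definition can be sanity-checked with the free-space path-loss formula: at a fixed distance, loss grows with frequency, so a UHF white-space signal has a larger link budget than 2.4 GHz Wi-Fi. A minimal sketch (idealised free-space propagation only; real deployments also gain from better penetration of obstacles):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (ideal line-of-sight, no obstacles)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Loss over 1 km at a TV white-space frequency vs. a common Wi-Fi frequency.
loss_tvws = fspl_db(1000, 600e6)   # ~600 MHz UHF white space
loss_wifi = fspl_db(1000, 2.4e9)   # 2.4 GHz Wi-Fi

# In free space, equal loss is reached at a range inversely proportional
# to frequency: a 600 MHz link reaches 4x as far as a 2.4 GHz link
# for the same loss budget.
range_ratio = 2.4e9 / 600e6

print(f"FSPL at 600 MHz over 1 km: {loss_tvws:.1f} dB")
print(f"FSPL at 2.4 GHz over 1 km: {loss_wifi:.1f} dB")
print(f"Difference: {loss_wifi - loss_tvws:.1f} dB, range ratio: {range_ratio:.0f}x")
```

The ~12 dB advantage at 600 MHz is the physical basis of the "Wi-Fi on steroids" remark: fewer base stations covering the same area.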



ALTHOUGH THE LATEST DEFINITION AND FREQUENCY ALLOCATION STANDARDS COME FROM THE USA, THE INVENTION IS EUROPEAN.

Brief

Whilst some new ideas are just evolutionary, some are revolutionary. The idea of being able to access wireless spectrum only when you need it represents a revolutionary shift in spectrum regulation.
This paradigm responds to the growing congestion of our radio spectrum, prompted by heavier usage of products like the iPad and content such as streaming video. This is the basis for what is termed ‘cognitive radio’ and we now see the first implementation of this in the cognitive use of the ‘white space’ spectrum vacated as television delivery moves from the old analogue to the new digital format.

Approach

Cambridge Consultants has been a pioneer in white space and cognitive radio. We were the first company to develop an operational wireless link in the Microsoft white space trial in Cambridge. Transmitting at 2Mb/s between our headquarters in the Science Park and the village of Cottenham, we publicised this achievement by making the first tweet over white space. In addition, we have developed a database emulator that estimates how much white space is available in the UK.
We have also developed compelling spectrum sensing technology which will be a vital component of future cognitive radio solutions. Presently, white space is only approved by regulators such as the FCC in combination with databases to manage spectrum availability. In the future, when spectrum becomes less available still, real-time sensing technologies will be required.
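One common way to illustrate the real-time sensing idea mentioned above is simple energy detection: measure the average power in a channel and compare it to a noise threshold. The sketch below uses synthetic samples and a hypothetical threshold; it illustrates the principle only, not Cambridge Consultants' actual technology:

```python
import random

def channel_energy(samples):
    """Average power of a block of baseband samples."""
    return sum(s * s for s in samples) / len(samples)

def channel_occupied(samples, threshold):
    """Crude energy detector: flag the channel busy if its
    average power exceeds the threshold."""
    return channel_energy(samples) > threshold

random.seed(42)
# Empty channel: receiver noise only.
noise = [random.gauss(0, 0.1) for _ in range(10_000)]
# Occupied channel: same noise plus an incumbent transmission (modelled
# here as a constant offset for simplicity).
signal = [random.gauss(0, 0.1) + 0.5 for _ in range(10_000)]

THRESHOLD = 0.05  # hypothetical; real detectors calibrate this against the noise floor

print("empty channel busy?   ", channel_occupied(noise, THRESHOLD))
print("occupied channel busy?", channel_occupied(signal, THRESHOLD))
```

Real sensing must detect incumbents far below the noise floor, which is why regulators currently trust databases over detectors.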

Benefit

As a method of spectrum access, white space spectrum is an enabler of product innovation rather than a technology in itself. The concept of a database controlling spectrum allocation to a given user means that it is important to recognise the business opportunities that best fit this methodology. This use of ‘borrowed’ spectrum will lend itself well to the transfer of discontinuous data, such as the download of a video or machine-to-machine (M2M) traffic.
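The database-driven access model described above amounts to a location-keyed lookup: a device reports where it is, and the database answers with the channels it may borrow there and under what limits. The sketch below is entirely hypothetical (locations, channel numbers and power limits are invented); real white-space databases follow regulator-approved protocols such as the IETF PAWS protocol:

```python
from dataclasses import dataclass

@dataclass
class ChannelGrant:
    channel: int          # TV channel number
    max_power_dbm: float  # transmit power ceiling at this location
    expires_s: int        # grant lifetime; devices must re-query afterwards

# Hypothetical availability map keyed by a coarse 1-degree location grid cell.
AVAILABILITY = {
    (52, 0): [ChannelGrant(21, 20.0, 3600), ChannelGrant(38, 17.0, 3600)],
    (48, 2): [ChannelGrant(25, 20.0, 3600)],
}

def query_white_space(lat: float, lon: float):
    """Return the channels a device may use at this location, per the mock database."""
    cell = (int(lat), int(lon))
    return AVAILABILITY.get(cell, [])

grants = query_white_space(52.2, 0.1)  # roughly the Cambridge area
for g in grants:
    print(f"channel {g.channel}: up to {g.max_power_dbm} dBm for {g.expires_s} s")
```

A real device would re-query the database before each grant expires, which is how the regulator keeps control of the 'borrowed' spectrum.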
Cambridge Consultants can provide access to cognitive radio technology as well as impartial insight and technical evaluation of the efficacy of new cognitive radio concepts, helping to highlight the business models to embrace and which to avoid.

SOURCE http://www.cambridgeconsultants.com
A MORE DETAILED HISTORICAL ANALYSIS IS PRESENTED AT WIKI

ON 20/03/2015 OUR ePAPHOS ADVISORY TEAM PARTICIPATED IN THE WORKSHOP
ON "GEO-LOCATION DATABASES", WHERE, APART FROM THE DIALOG-CONSULTATION ON SHARING THE DATA (LSA), THE WHITE SPACE POLICIES FOR EUROPE WERE ALSO DISCUSSED.

STREAMING VIDEO FOR GEO-LOCATION DATABASES

THE GATHERING WAS ORGANIZED BY DG NETWORKS/CONTENT AND TECHNOLOGY.
OUR CONTRIBUTION TO THE EUROPEAN CITIZENS WAS TO PUT FORWARD SOME QUESTIONS WHICH POINT TOWARDS SPECIFIC STRATEGIES.
WE BELIEVE THAT IF THE EUROPEAN COMMISSION'S ORGANS ADOPT THEM, THIS MIGHT LAY THE BASIS FOR INDEPENDENT, LEADING EUROPEAN COMMUNICATION POLICIES IN THE NEAR FUTURE.

FOR REASONS UNKNOWN, THE QUESTIONS WE PUT ARE NEITHER VISIBLE NOR AUDIBLE IN THE RECORDING, BUT THE RESPONSES ARE.
THAT IS WHY WE BRING THEM TO THE CITIZENS' EYES BY WRITING THEM DOWN.

AT THE 50:15 MARK WE ASKED:

1) DO EUROPEAN CITIZENS AND NATIONAL AUTHORITIES HAVE ANY BENEFIT, FROM AN ECONOMIC POINT OF VIEW, FROM THESE SHARING TECHNOLOGIES?

2) WHAT PRIVACY AND SECURITY MEASURES HAVE BEEN TAKEN SO THAT THE GEO-DATA AND LSA ARE PROTECTED, AS THESE LOOK LIKE SENSITIVE ISSUES?
WHAT IS MEANT BY THE TERM "protection requirements for primary use"?



AT THE 3:06:20 MARK WE ASKED:

1) ARE THERE ANY OTHER EUROPEAN COUNTRIES DEVELOPING THIS TECHNOLOGY, OR IS IT ONLY THE UK?

2) ONCE AGAIN A TECHNOLOGY, LIKE THOSE PRODUCED BY NOKIA, SKYPE AND MANY OTHER EU INVENTIONS, HAS BEEN TAKEN UP BY USA COMPANIES (GOOGLE, MICROSOFT ETC.) AND IS BEING DEVELOPED BY THEM, WHILE WE EUROPEANS SIT, WATCH AND PAY OUR MILITARY ALLIES TO IMPLEMENT AND REGULATE IT.

AGAIN WE ARE LOSING THE COMPETITIVE ADVANTAGE, THE ONE OF INNOVATION.

HOW ARE WE GOING TO REACT, WHEN PRIVATE USA COMPANIES ARE ALREADY EXPERIMENTING OUTSIDE THEIR COUNTRY?
WHAT SHALL WE DO FOR THE EXPANSION OF THIS TECHNOLOGY IN ASIA, AFRICA AND OTHER ZONES, WHICH MIGHT CONTRIBUTE TO OUR ECONOMIES?

3) WHAT ARE THE STRATEGIC POLICIES OF THE USA REGULATORY BODIES (FCC), AND WHAT WILL BE OUR STRATEGY FOR THE EUROPEAN LANDS?

THE ANSWERS CAN BE SEEN AND HEARD IN THE VIDEO, AFTER THE INVISIBLE MAN'S QUESTIONS.

CLOSING THIS ARTICLE, WE PRESENT THE PRACTICAL BENEFITS.
THANKS FOR YOUR ATTENTION AND SUPPORT.




Thursday, March 12, 2015

IN ECONOMICS THERE'S NO MAGIC AND THE PIPER HAS TO BE PAID SOONER OR LATER




Friday, January 17, 2014

PHOTONS LIKE PHAOS-ΦΩΣ

HAPPY NEW YEAR,
WITH  HYGEIA FOR ALL,

DEAR FELLOW READERS CHAIRESTHAI,
YESTERDAY OUR TEAM PARTICIPATED IN THE HORIZON 2020 FUNDING OPPORTUNITIES EVENT FOR PHOTONICS, AT THE DG CONNECT/PHOTONICS UNIT PREMISES, BRUSSELS ( https://twitter.com/epaphosinfo/status/423748723454517248 ).
DURING THE CONFERENCE, CROWDED WITH VARIOUS STAKEHOLDERS, THE STRATEGIC POLICIES AND WAYS OF FUNDING WERE PRESENTED BY THE SKILLFUL DG UNIT TEAM.
UNIVERSITIES AND INDUSTRIES PROPOSED INNOVATIVE TECHNICAL IDEAS FOR COOPERATION WHICH, IN TIME, WILL MAKE EUROPE A LEADER IN THE SECTOR, IF THEY ARE IMPLEMENTED PROPERLY.
COPY-AND-PASTE PROJECTS, AS THE UNIT'S STAFF RIGHTLY NOTICED FROM ITS EXPERIENCE WITH THE PREVIOUS FP5, FP6 AND FP7 PROGRAMS, AREN'T GOING TO BE ACCEPTED ANY MORE BY THE EVALUATORS.
OUR CONTRIBUTION POINTED OUT AND BRIEFLY ANALYZED 3 POINTS WHICH WEREN'T MENTIONED IN THIS YEAR'S CALL:

1) HEALTH EFFECTS AND IMPLICATION STUDIES FOR THE RESEARCH AND DEVELOPMENT OF NEW LIGHTING MATERIALS (OLED ETC.)

2) THE SEMICONDUCTOR SECTOR, WHICH IS TOTALLY DEPENDENT ON MANUFACTURERS IN OTHER G(R)AIAN ZONES, WHEN EUROPE SHOULD BE ON THE FRONT LINES OF THE HIGH-VALUE GLOBAL PRODUCTION CHAIN.
EVERYTHING IN THE NEW CIVILIZATION WILL BE BASED ON CHIPS, AND SO ON SEMICONDUCTORS...

3) QUANTUM INFORMATION, WHERE IT EMPLOYS PHOTONIC METHODS AND OPTICS


IT IS OUR WISH THAT THIS SCIENTIFIC-INDUSTRIAL SECTOR, SO IMPORTANT FOR THE EUROPEANS, WILL CONTRIBUTE IN THE NEAR FUTURE TO RE-ESTABLISHING OUR FACTORIES, VIRTUAL OR NOT, ON THE EUROPEAN LANDS, AS WAS MENTIONED IN PREVIOUS ARTICLES OF OURS SOME YEARS AGO. 1

IRENE,SALAM,PEACE
A.C.

New Year, New Horizon 2020

Small businesses, scientists and citizens north and south of Ireland would be right in thinking Christmas came early this year. On 10 December in Dublin, European Commissioner Maire Geoghegan-Quinn launched Horizon 2020, the new €79bn and largest ever EU fund set up to support research and innovation at large.
Something for everyone  
According to the commissioner “everyone involved in Horizon 2020 has reason to celebrate” because it is not strictly constrained to science. Instead, Horizon 2020 will focus on three distinct yet mutually reinforcing priorities:
Excellent Science: with €24.6 billion budgeted to support the EU’s position as a world leader in science.
Competitive Industries: with €17.9 billion earmarked to secure industrial leadership in innovation, which includes a major investment in key technologies, as well as greater access to capital and support for SMEs.
Societal Challenges: with €31.7 billion aimed at addressing major concerns shared by all Europeans, across six key themes: health, demographic change and well-being; food security, sustainable agriculture, marine and maritime research and the bio-economy; secure, clean and efficient energy; smart, green and integrated transport; climate action, resource efficiency and raw materials; and inclusive, innovative and secure societies.
No red tape wrapping
In response to criticism of its forerunner, the 7th Framework Programme for Research and Technological Development (FP7), Horizon’s application procedure has been streamlined. There will be one online system for applying as well as running and reporting on projects. This is welcome news for those tired of getting tied up in red tape, not least SMEs. In fact, particular emphasis has been placed on getting SMEs involved – 20% of the budget in the industry and society pillars has been set aside for them specifically.
Irish SMEs will be able to engage in large collaborative projects or seek support through a new dedicated SME instrument for highly innovative smaller companies. A risk finance support for SMEs is also being put in place to generate commercial value from their research, resulting in economic growth and job creation.
Across the miles  
Projects that demonstrate international collaboration will have a greater chance of success in Horizon 2020. In this regard, Ireland is already at an advantage over its European competitors. One island, one language means that applicants here won’t have to go too far to create cohorts. Making the most of its cross-border location, the Smart ECO Hub can help to identify and match partners.
Raise your game before your glass
The Irish government wants to draw down a minimum of €1.25bn of Horizon 2020 funding over its seven-year lifespan – the equivalent of €3m a week coming into the country. But to achieve that target, Geoghegan-Quinn says Irish applicants will need to up their game to compete on the European stage.
Dublin in December was undoubtedly different this year. To extend that difference into the long-term for economic growth within and beyond our capital city, Ireland’s innovators will need to embrace Brussels’ billion euro bonus and make the New Year a year of new horizons.
Article written for the magazine and newsletter of Smart ECO Hub, a project part financed by the European Union Regional Development Fund
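The "€3m a week" figure quoted in the article above is easy to verify: €1.25bn spread over the programme's seven-year lifespan works out to roughly €3.4m per week.

```python
target_eur = 1.25e9   # Irish drawdown target for Horizon 2020
weeks = 7 * 52        # seven-year programme lifespan, in weeks
per_week = target_eur / weeks

print(f"~EUR {per_week / 1e6:.1f}m per week")
```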


NOTES

1    SOLAR POWER

      LIGHT WAVES

      MORE LIGHT


CSIRO, University of Tasmania scientists fit tiny sensors onto honey bees to study behaviour and population decline


photo source https://plus.google.com/u/0/+angeloscharlaftis/posts


Sunday, August 19, 2012

ROBOT ETHICS



Morals and the machine

IN THE classic science-fiction film “2001”, the ship's computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship's mission (investigating an artefact near Jupiter) and to keep the mission's true purpose secret from the ship's crew. To resolve the contradiction, he tries to kill the crew.




As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.
A bestiary of robots
Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly). Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.
Robots are spreading in the civilian world, too, from the flight deck to the operating theatre . Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo's new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford's B-Max minivan. Fully self-driving vehicles are being tested around the world. Google's driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona a few days ago, Volvo demonstrated a platoon of autonomous cars on a motorway.
As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.
One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times. Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones. But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1m lives a year.
Instead, society needs to develop ways of dealing with the ethics of robotics—and get going fast. In America states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation. It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.
The best-known set of guidelines for robo-ethics are the “three laws of robotics” coined by Isaac Asimov, a science-fiction writer, in 1942. The laws require robots to protect humans, obey orders and preserve themselves, in that order. Unfortunately, the laws are of little use in the real world. Battlefield robots would be required to violate the first law. And Asimov's robot stories are fun precisely because they highlight the unexpected complications that arise when robots try to follow his apparently sensible rules. Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular.
Three laws for the laws of robotics
First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.
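The logging requirement described above (autonomous systems keeping records detailed enough to explain their decisions afterwards) can be sketched as an append-only decision log. The structure below is a hypothetical illustration, not a fielded standard:

```python
import json
import time

class DecisionLog:
    """Append-only record of an autonomous system's decisions,
    kept so responsibility can be allocated after an incident."""

    def __init__(self):
        self.entries = []

    def record(self, sensor_inputs: dict, options: list, chosen: str, rule: str):
        self.entries.append({
            "t": time.time(),
            "inputs": sensor_inputs,  # what the system perceived
            "options": options,       # actions it considered
            "chosen": chosen,         # what it actually did
            "rule": rule,             # which predefined rule justified the choice
        })

    def export(self) -> str:
        """Serialise the log for auditors and investigators."""
        return json.dumps(self.entries, indent=2)

log = DecisionLog()
log.record(
    sensor_inputs={"pedestrian_ahead": True, "speed_kmh": 42},
    options=["brake", "swerve_left"],
    chosen="brake",
    rule="minimize-harm: braking endangers no third party",
)
print(log.export())
```

Note the tension the text identifies: a log like this presumes the system can name the rule behind each choice, which is exactly what opaque learned systems such as neural networks struggle to provide.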
Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help. Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices. Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.
Technology has driven mankind's progress, but each new advance has posed troubling new questions. Autonomous machines are no different. The sooner the questions of moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.


 The Ethical and Social Implications of Robotics
Robots will be all over the place in a couple of decades, not to destroy us in Terminator fashion but to clean our houses, take care of our elderly or sick, play with and teach our children, and yes, have sex with us. If you wonder about the implications of such scenarios, read this book. It contains careful reflections -- sometimes enthusiastic, sometimes cautious -- about the many psychological, ethical, legal and socio-cultural consequences of robots engineered to play a major role in war and security, research and education, healthcare and personal companionship in the foreseeable future. The book contains contributions from many of the key participants in the discussions about robot ethics which began in the twenty-first century. Their papers are significant in their own right, but they gain more value from the clear organization of the book, which presents a succinct overview of the primary strands of the field. In eight parts, each consisting of three chapters, the reader is introduced to a specific topic and then confronted with some of the current issues, positions and problems that have arisen.
The first three chapters provide the reader with a general introduction to robotics, ethics and the various specificities of robot ethics. Together with the second section, on the design and programming of robots, they provide the necessary background for those unfamiliar with the particulars of robot ethics. Especially relevant is Colin Allen and Wendell Wallach's chapter that nicely summarizes the main point of their seminal book Moral Machines (2008)[1]. Allen and Wallach suggest that a 'functional morality', i.e., machines with the capacity to assess and respond to moral challenges, is not only possible but required. In order to perform their complex tasks in everyday environments, robots will need a considerable degree of autonomy. They approvingly quote (on p. 56) Rosalind Picard: "The greater the freedom of a machine, the more it will need moral standards".[2] They go beyond their summary in categorizing the different critiques their book encountered and addressing them in the remainder of the chapter in a refreshingly honest and constructive way. For instance, they admit to being 'guilty as charged' to the criticism that they may have contributed to the illusion that there is a technological fix to the dangers AI poses: "We should have spent more time thinking about the contexts in which (ro)bots operate and about human responsibility for designing those contexts." (p. 65).
There is a similar constructive openness in the other two chapters that explore the close connections between religion and morality. James Hughes attempts to draw lessons from a Buddhist framework for the attempt to create morally responsible machines, but it seems fair to say that his chapter remains quite general and much hinges on the still distant possibility of creating conscious, self-aware machine minds. In contrast, Selmer Bringsjord and Joshua Taylor become very specific and technical in their discussion of a 'divine-command computational logic', a computational natural-deduction proof theory "intended for the ethical control of a lethal robot on the basis of perceived divine commands." (p. 93).
The sentiments of Noel Sharkey that "robots will change the way that wars are fought" (p. 111), coupled with news reports from around the world of 'Predator drones attacking foreign soil,' sets an ominous tone from the outset of Section 3 on military robots. In addition to taking an overview of a number of current technologies like the MAARS (Modular Advanced Armed Robotics System) and the SWORD (Special Weapons Observation Reconnaissance Detection System), and the push (mainly by the US military) for the emergence of fully autonomous robotic weapons, Sharkey in Chapter 7 identifies a number of ethical issues like the proportionality of force and how robotic weapons might fit within current ethical frameworks. One ethical issue which is particularly striking is the question of whether a robot should be allowed to autonomously identify and kill (suspected) enemy combatants. For us at least an inner conflict arises. On the one hand the idea of robots replacing soldiers could be commended from the standpoint of a person who does not want to see their fellow countrymen killed in combat. On the other hand, however, the idea of robots making life and death decisions seems extremely risky, particularly (but not limited to) when we consider the ethical implications if a robot were to make a mistake and kill a civilian.
The idea of combatant identifications is further developed in Chapter 8 by Marcello Guarini and Paul Bello who note that combat identification is exacerbated by today's counter insurgency brand of warfare. Gone are the days of 'total war' where enemies faced each other en-mass on the battlefield in clearly defined uniforms. In today's theatres of war combatants can blend in with non-combatants, meaning one's ability to identify 'who an enemy is' becomes a tricky task. As a result, soldiers are actively being forced to make snap judgments about a person through their behavior by ascribing mental states to them (p. 131). In this type of warfare where intuitions are key, the ultimate question is whether a robot could be as good as a human at sensing and evaluating a situation and acting on intuitions.
The problems noted by Sharkey, Guarini and Bello regarding the ethical implications of mistakes made in the theatre of war seem to reach a natural crescendo in the form of the issue of responsibility, which Gert-Jan Lokhorst and Jeroen van den Hoven tackle in Chapter 9. They provide a rigorous account and overview of responsibility and consider where the line might lie in terms of when responsibility could shift between designers and the robot itself.
Section 4 attempts to cover a wide range of issues regarding the law and governance of robotics. In Chapter 10 Richard O'Meara gives us insight into how we could extend current legal infrastructures like the Geneva Conventions to robots (as a starting point) in order to create a framework for the governance of robots that accounts for their growing sophistication and increasingly wide deployment in the theatre of war. In Chapter 11 Peter Asaro considers how a number of crucial legal concepts like responsibility, culpability, causality and intentionality might be applied to new cases of tele-operated, semi-autonomous and fully autonomous robots. It is worth noting that the coverage of all three levels of robot autonomy is particularly impressive. In Chapter 12 Ryan Calo surveys a number of issues churned up by robots and their implications for privacy. Calo focuses mainly on how the increased risk of hacking, due to more robots in our lives, potentially opens the door for hackers to covertly view and participate in our private lives. He then moves on to how the increasing surveillance potential of robots affects constitutional rights under the Fourth Amendment against unreasonable government intrusions in the private sphere.
Unlike Section 3, where it is easy to find a golden thread between the chapters, the chapters in Section 4 seem a little more disjointed from one another. Whilst all the chapters are linked by the idea of governance and regulation, the variety of legal subject matter is very broad. To move from the governance of military robots (Chapter 10) to the extension of jurisprudential concepts to robots (Chapter 11), and then to jump to robots and privacy (Chapter 12), suggests that too much material is being covered, with no opportunity for a substantive discussion in any one area. This should not be taken as a criticism of the texts themselves; they are all well written, engaging and, in the space the authors have available, very good. But while it might not have been the intention of the editors to make connections between the various chapters, the lack of connection between them means the section lacks 'oomph,' making it seem a little watered down.
Emotional and sexual relationships between humans and robots are the topic of Section 5. Matthias Scheutz clearly identifies the danger that robots specifically designed to elicit human emotions and feelings could lead to emotional dependency or even harm. Several experiments are discussed that show that humans are affected by a robot's presence in a way "that is usually only caused by the presence of another human" (p. 210). However, in the case of human-robot interaction, the emotional bonds are unidirectional and could be exploited by, e.g., companies that make their robots "convince the owner to purchase products the company wishes to promote" (p. 216). David Levy looks at the issue of future robot prostitutes. After discussing the reasons for (especially) men to pay (mostly) women for sex, Levy considers five aspects of the ethics of robot prostitution. Unfortunately, these aspects receive a rather cursory treatment. For instance, he compares sexbots to vibrators and argues from the widespread acceptance of the latter that objecting to the former would be 'anomalous' (p. 227).
However, he seems to ignore the unidirectional emotional bonds discussed in the preceding chapter by Scheutz. What makes sexbots genuinely different is their ability to tap into our social interaction capacities, sensitivities and vulnerabilities. The importance of this also comes to the fore in Levy's discussion of the ethics of using robot prostitutes vis-à-vis one's partner. He speaks of "the knowledge that what is taking place is nothing 'worse' than a form of masturbation" (p. 228), thereby again missing the fact that these sexbots will have certain looks and behavioral styles that may lead to emotional consequences for both the user and his/her partner. One would expect a discussion of such delicate issues to focus on the potential differences between robotic and standard sex toys (or at least argue that none exist), not to rest on the assumption that they will be similar in most relevant respects.
Blay Whitby directly addresses Levy when he considers how social isolation might drive people to robots for love and affection. Whitby says, "peaceful, even loving, interaction among humans is a moral good in itself", and "we should distrust the motives of those who wish to introduce technology in a way that tends to substitute for interaction between humans." (p. 238). He therefore suggests that robot lovers and caregivers are political topics, rather than simply technological. Whatever one may think about the particular positions and arguments that are presented in this section, the discussion in itself, though possibly distasteful to some, will remain with us for a long time to come.
Section 6 brings us back to, as its introduction states, the "more serious interaction" (p. 249) between robots and humans, in the form of companionship and medical care. Jason Borenstein and Yvette Pearson examine whether robot caregivers will lead to a reduction in human contact for members of society who tend to be marginalized as a result of their impairments. Specifically, they analyze robot care and robot-assisted care from the perspective of human flourishing and Mark Coeckelbergh's differentiation between shallow (routine), deep (reciprocity of feelings) and good (respecting human dignity) care.[3] They express concern about whether human beings will still meaningfully be in the loop as robot caregivers become more pervasive (p. 262).
The care of the vulnerable, namely young children and the elderly, is the main topic of the chapter by Noel and Amanda Sharkey. Robot supervision could lead to a loss of privacy and liberty. For instance, in the case of a young child playing, the problem lies in trusting the robot's capacity to determine what constitutes a dangerous activity (p. 272). How do we prevent robot care from becoming overly restrictive? Another issue is that robot care might come as a replacement for human contact. Studies have been done with robot pets, such as Paro, that respond interactively. Although positive effects have been reported, the authors rightly warn, "These outcomes need to be interpreted with caution, as they depend on the alternatives on offer" (p. 277).
To probe our intuitions concerning robot servants, Steve Petersen suggests considering a 'Person-o-Matic' machine, not unlike the food replicator in Star Trek, that can make an artificial person to just about any specification, from plastics, metals, or organic matter, with potentially any kind of programmable behavior. What would we find allowable or unacceptable about creating 'artificial servants' this way, or about the kinds of servants that could be created? Petersen considers several possibilities and concludes, "Sometimes I can't myself shake the feeling that there is something ethically fishy here. I just do not know if this is irrational intuition . . . or the seeds of a better objection." (p. 295).
The introduction to Section 7, 'Rights and Ethics', is engaging and well-framed, asking the reader the provocative question of whether we could one day see a robotic 'emancipation proclamation.' In Chapter 19 Rob Sparrow considers whether a robot could be a person, which he believes would consequently guarantee the robot moral consideration. Sparrow notes that our conception of personhood has been anthropomorphized to the point that being a human has become the condition for being a person. He challenges this view and attempts to demonstrate how a robot could be a person through a test called the 'Turing Triage Test.' Kevin Warwick in Chapter 20 provides a fascinating thought experiment built on research in the field of neuromorphics: he asks us to consider whether a robot with a human brain could deserve personhood. Warwick's refusal to take the typical physicalist-functionalist approach to the psychological capabilities required for personhood made the article a refreshing read, distinguishing it from the abundance of articles that dogmatically restate the physicalist-functionalist argument that the psychological capabilities associated with personhood can be characterized as functional neural activity rather than being tied to a specific biological state.[4]
To finish off Section 7, Anthony Beavers (Chapter 21) takes a metaethical lens to the field and considers the implications that robotic (non-biological) technologies have for an ethics derived from biological agents, and specifically the strain that robots place on these biologically derived ethical concepts. What we found enjoyable about this section is how forward-looking it is. No reasonable person would currently argue that any robot is deserving of rights or should be considered a person, but this section takes a futuristic approach to these issues, teasing us with questions of the 'what if?' variety. It attempts to push the envelope of robot ethics by going beyond the 'state of the art' of today's robot ethical issues to how we might reach the point of a rights-bearing robot, and to the conditions for that robot to be a rights holder.
The Epilogue (Section 8) is an excellent overview of the book by Gianmarco Veruggio and Keith Abney. They not only condense many of the hurdles facing robot ethics that are raised throughout the book, but also come close to setting out a research agenda for some of the key questions that need to be asked and answered within the field. One particularly interesting question they raise, which many writers allude to but none really seems to have properly solved, is the question of 'when does a machine become a moral agent?' (p. 353). While no one would reasonably underestimate the difficulty of finding the point at which a machine becomes a moral agent, it does seem clear that once an answer is given, many of the ethical issues surrounding robot ethics, such as moral and legal responsibility, personhood and rights, will be far easier to answer (or at least to justify a position on). We would no longer be asking whether a robot should have rights or whether the robot is a person: if one can argue that a robot is a moral agent, the answers to these types of questions will hopefully be more straightforward. On the other hand, of course, answering the question about a robot's moral agency actually requires a clear and consistent position on many, if not all, of the other issues mentioned. This book does us a great service in bringing together so many of the right kinds of questions to ask. They are difficult, but if robot ethics is to meet the demands of coming developments in robot technology, we need to begin tackling today the ethical questions that are likely to arise tomorrow.
Patrick Lin, Keith Abney, and George A. Bekey (eds.)

Reviewed by Pim Haselager, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, and David Jablonka, University of Bristol



[1] Wallach, W., & Allen, C. (2008). Moral Machines: Teaching robots right from wrong. New York: Oxford University Press.
[2] Picard, R. (1997). Affective computing. Cambridge, MA: MIT Press, p. 19.
[3] Coeckelbergh, M. (2010). Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice, 13 (2), 181-190.
[4] Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell.
SOURCE: http://ndpr.nd.edu
