
Why Should We Care about Technology Ethics? The Updated ACM Code of Ethics

Key Takeaways

  • The ACM Code of Ethics and Professional Conduct was updated in 2018 to respond to the changes in the computing profession since 1992. 
  • The Code is aimed at aspiring and practicing computing professionals and expresses the conscience of the profession.   
  • Ethics is important for companies and practitioners of all sizes to consider - the general public, employees, and other stakeholders now expect technology companies to take greater responsibility for the public good. 
  • People developing AI technologies in particular have a higher responsibility, as the uncertainty of the methods and applications of machine learning could lead to public distrust of such technologies, especially as they are integrated into infrastructure. 
  • Applying the Code to your daily work means considering it holistically, not just reading it once, especially when decisions need to be made during the innovation process. 

The 2018 rewrite of the ACM code of ethics and professional conduct has brought it up-to-date with new technologies and societal demands. This code supports the ethical conduct of computing professionals through a set of guidelines for positively working in the tech industry.

The ACM Code of Ethics and Professional Conduct aims to help professionals reflect upon the impact of their work and act responsibly:

The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way. Additionally, the Code serves as a basis for remediation when violations occur. 

The code of ethics is divided into four sections.

Section 1 General Ethical Principles states that a computing professional should contribute to human well-being; avoid harm; be honest and trustworthy; be fair and take action to not discriminate; respect the work required to produce new ideas, inventions, creative works, and computational artifacts; and respect privacy and honor confidentiality. These principles provide the foundation for ethical conduct.

Section 2 Professional Responsibilities explores the responsibilities of computing professionals related to the work that they do. Examples of these responsibilities are striving to achieve high quality; knowing and respecting existing rules pertaining to professional work; accepting and providing appropriate professional reviews; fostering public awareness and understanding of computing, related technologies, and their consequences; and designing and implementing systems that are robustly and usably secure.

Section 3 Professional Leadership Principles guides individuals who have a leadership role. Examples of principles mentioned in this section are ensuring that the public good is the central concern during all professional computing work, managing personnel and resources to enhance the quality of working life, creating opportunities for members of the organization or group to grow as professionals, and recognizing and taking special care of systems that become integrated into the infrastructure of society.

Section 4 Compliance With The Code ensures commitment to the code by stating that a computing professional should uphold, promote, and respect the principles of the code and treat violations of the code as inconsistent with membership in the ACM.

The previous version of the code dates back to 1992. One of the main reasons that the ACM decided to update the code of ethics was that the internet and many other new technologies have come along since. These new and emerging technologies have affected society in completely new and interesting ways that weren't expected when the original code was written. 

Some of the bigger changes to the code include new sections requiring security by design, requiring companies whose services become infrastructure (e.g. Google) to take a higher level of responsibility for serving the public good, a greater duty of care over the end of life of systems, and specific provisions addressing the concerns around machine learning. 

Clauses have been re-written because, for example, the nature of intellectual property has changed since 1992, and the positive aspects of the code have been highlighted: not just what professionals shouldn't do, but what they can do to make their technology development better. 

Another major change has been the focus of the Code - originally it was on professionalism and quality; now it is on developing technology with the public good as the paramount concern. This helps to address some of the dilemmas involving conflicting values within the Code - whichever value better upholds the public good should be the one prioritised. 

Catherine Flick, a member of the ACM Committee on Professional Ethics, spoke about why we should care about technology ethics at QCon London 2019. InfoQ interviewed her about what makes ethics important, how to apply the code to artificial intelligence and machine learning applications, corporate goals and ethics, and the role of regulation and certification in ethics.

InfoQ: What makes ethics important for large and smaller companies?

Catherine Flick: A lot of companies are starting to see the potential fallout of not thinking about ethics, for example, the impact that the Cambridge Analytica scandal has had on Facebook; Google’s employees pushing back against unethical practice within Google, etc. They are also seeing, however, the potential for appealing to a more ethically-minded market that cares about their privacy, security, and data, for example, and is becoming more vocal about their needs, rather than accepting the latest technology foisted upon them. 

InfoQ: How can we apply the code of ethics for artificial intelligence and machine learning applications?

Flick: Holistically, of course! While there is direct mention of machine learning in Principle 2.5, that doesn’t mean you should just skip to that part and only look at what it has to say. There are aspects of every principle in the Code that contribute to understanding and working through the ethical issues with machine learning. In fact, I challenge you to find a principle in sections 1 and 2 that you can’t apply to the world of machine learning! And even those in section 3 can apply to managing a machine learning business or project. To actually use the Code, though, I suggest you carefully consider each principle in the context of your work, consider which stakeholders might be affected (and not just direct stakeholders!), identify possible future ethical impacts of your work, review possible actions you can take to mitigate or solve the problem, and then look at the processes in place within your organisation that enable you to prevent similar problems in the future. 
That said, specific mention of machine learning is made in Principle 2.5, which talks about risk. In the guidance to the principle, it states:

Extraordinary care should be taken to identify and mitigate potential risks in machine learning systems. A system for which future risks cannot be reliably predicted requires frequent reassessment of risk as the system evolves in use, or it should not be deployed. Any issues that might result in major risk must be reported to appropriate parties.

Put simply, if you can’t predict what your system will do, and you can’t or won’t monitor it, you shouldn’t deploy it. A good example here is Microsoft’s TayBot, which continued to operate long after it was producing problematic content. However, this needs to be understood in the context of the rest of the Code - why was TayBot so problematic? It was violating other aspects of the Code as well, particularly in terms of discrimination, quality of work, and avoiding harm. This is why it’s important to not just look at the principles that directly reference your project, or just read the Code once and be done with it. 

InfoQ: How do you respond to someone who says the goal of a corporation is to return value to its shareholders, not to be ethical, not to do social good?

Flick: The original purpose of business is to serve society. If you don't serve society, it’s less likely that someone will buy your product. And these days there's been a huge push from society towards requiring more ethical business practices. We've also seen pushback from employees within several well-known large companies when it comes to ethical issues, so there’s internal as well as external push for more ethical technologies. We're seeing these sorts of demands for more environmental considerations, more sustainability considerations, and more concern for the societal impact of technologies, too.

People are worried about their data, they're worried about their privacy, they're worried about their kids, they're worried about all kinds of ethical issues that impact them. The fact that a lot of these companies have been able to operate in a relatively grey area for so long has meant that we've actually seen where these cases can go. There's now demand for governments to regulate more heavily, as can be seen with the GDPR. So we are seeing a demand for ethics in technology - probably also why all the large consulting companies have suddenly sprouted “Responsible Tech” services! 

InfoQ: What's your view on the role of regulation and certification in ethics?

Flick: It’d be very difficult to certify professionals in the technology industry. Unlike, say, civil engineering or medicine, there is a huge range of professions incorporated into the tech industry, and you can “do tech” from a very early age with easy-to-access tools and skills. Anyone who wanted to set out to require certification would need to narrow their scope quite a bit, which is definitely possible - perhaps if you were to work on high-level infrastructure you might need to be certified to a certain level of competence. Certainly, there is a lot of certification available in the field - but it’s largely specific to job roles and used to push certain approaches to, for example, project management or network security. The Code is also aimed at aspiring professionals, however - we want students, self-taught techies, and anyone else who uses computers in meaningful ways to take note of the Code and what it has to say about what they are creating - i.e., what is acceptable to the profession. We’ve had new members join the ACM because of the Code - they like what they see as being a kind of “oath” they can stand behind and use to guide their work and to hold up to their employers if they are being challenged to do something unethical. Having the weight of the largest professional organisation behind them certainly helps - as we’ve seen in the case of Google employee pushbacks. 

Regulation is another matter - and we know from many examples that regulators often don’t get it right. It’s important for computing professionals to be involved in helping governments decide what regulations to put in place and what impacts they might have; ethically, however, they should not push for regulation that is valuable to their organisation (or to themselves personally), but for regulation that promotes the public good. Certainly, the role of the ACM is significant here too - ACM members must abide by the Code of Ethics, and if they violate it and are reported for that violation, they can potentially be removed from the ACM (along with other, lighter sanctions depending on the outcome of the investigation). As the Code represents the conscience of the profession, it should be undesirable to any professional to be removed from such an organisation. 

InfoQ: Where can InfoQ readers go for online resources on ethics?

Flick: Most of the useful online resources are pretty specific. Next to the ACM code of ethics, there’s the IEEE/ACM Software Engineering code of ethics. Approaches like “responsible innovation” are coming into fashion in Europe, with tools like the Responsible Innovation self-check tool aimed at companies developing technology, as well as roadmaps for some sectors which look at shaping ethical innovation in those sectors. Case studies and other papers are available at Orbit, which looks specifically at responsible innovation in ICT in the UK.

Ethics of AI is a big sector at the moment, with many organisations coming out with their own guidelines for responsible/ethical AI. I believe some of the best work is being conducted at the Alan Turing Institute on ethics.

About the Interviewee

Dr. Catherine Flick is a Reader in Computing and Social Responsibility at the Centre for Computing and Social Responsibility at De Montfort University. She is a member of the ACM’s Committee on Professional Ethics and was a committee member of the ACM’s Code of Ethics update team. She teaches research methods and computer ethics, and hosts a podcast on ethics and video games called “Not Just A Game”.
