Sociotechnical Implications of Using Machines as Teammates

Key Takeaways

  • AI has become more than just a tool; it now merits consideration as an additional teammate worthy of collaboration.
  • While this increases a project’s speed and technical rigor, AI teammates bring a fresh set of challenges around social integration, team dynamics, trust, and control.
  • To increase transparency, researchers are exploring ways to embed social cues into algorithms and user interfaces. Conversely, they are also improving models of human emotion and body language.
  • Sociotechnical frameworks establish relationships and transition processes to successfully incorporate AI into teams.
  • Business analysts and behavioral researchers are evaluating methodologies to balance knowledge control and asymmetry to build trust between AI and users.

Early Adopters of AI Teammates

While many companies have adopted AI to support their technical and business goals, few have advanced to fully engaging AI as a “member” of their teams. Current chatbots and digital assistants are limited in scope; they typically handle only a narrow slice of problem assessment and resolution. The true capacity of AI is realized when it is brought into every step of the decision-making process: defining a problem, identifying root causes, generating solutions, evaluating and ranking outcomes, executing a course of action, and reflecting on past decisions and performance.

Although machines currently lack many aspects of human cognition, they excel in critical decision-making areas. The complementary nature of human intellect and AI computing has proven successful in initial pilots of AI teaming. Medical imaging teams that combined the expertise of doctors with AI saw an error rate of only 0.5%, while doctors alone (3.5% error rate) and AI alone (7.5% error rate) fared worse. In an online problem-solving game, AI helped human players increase their problem-solving speed by 55%. However, even some of the most recognized leaders in tech -- including Elon Musk -- share doubts about the implications of increasingly intelligent AI. Ambitious researchers have started to home in on the balance between intelligent, helpful AI teammates and existing human dynamics of trust and social integration.

Challenges of AI Teammates

Beyond computing challenges, potential AI teammates must clear qualifications that go beyond technical fidelity and accuracy. AI must understand the context and broad implications of potential outcomes, a challenge even for their human teammates. They must also be trusted and accepted as worthy, valuable teammates by their human counterparts. Further, they need either the authority to act autonomously in situations that require a quick response, or an efficient approval mechanism when a human needs to be involved in a process.

To contribute meaningful results, AI algorithms must be able to effectively understand their human teammates. Human behavior is inherently difficult to model, and even humans struggle to agree on the best answer or outcome in particularly tricky scenarios. Using historical decision-making records is problematic, as it can be difficult to account for inherent biases and changing value systems over time. However, as these models improve, AI will become more capable of “common sense” understanding. Three current approaches to capturing human values within AI show promise: overlapping consensus, veil of ignorance, and social choice.

Three approaches for balancing human ethics within AI. Courtesy Iason Gabriel.

An overlapping consensus model finds and follows common points that multiple parties or ethical standards agree with. The veil of ignorance model keeps the AI or user as impartial as possible to avoid choosing outcomes that might unfairly favor one group over another. The social choice model ranks and mathematically scores viewpoints and outcomes. All of these approaches need contextualized development and careful refinement to help a practical AI teammate make decisions, but each aims to balance AI’s ability to maximize utility with various, potentially disparate human value systems.
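
As a concrete illustration of the social choice approach, the minimal sketch below aggregates ranked preferences from several hypothetical stakeholder groups into a single score per outcome using a simple Borda count. The stakeholder groups, candidate outcomes, and scoring rule are illustrative assumptions, not a prescription from the research cited above.

```python
# Social-choice-style aggregation sketch: each stakeholder group ranks the
# candidate outcomes (best first), and a Borda count turns those rankings
# into one score per outcome. All names and rankings are hypothetical.

def borda_scores(rankings: list[list[str]]) -> dict[str, int]:
    """Score outcomes from per-stakeholder rankings, best outcome first."""
    scores: dict[str, int] = {}
    for ranking in rankings:
        n = len(ranking)
        for position, outcome in enumerate(ranking):
            # First place earns n-1 points, last place earns 0.
            scores[outcome] = scores.get(outcome, 0) + (n - 1 - position)
    return scores

stakeholder_rankings = [
    ["reroute_shipment", "delay_order", "cancel_order"],  # operations
    ["delay_order", "reroute_shipment", "cancel_order"],  # customer support
    ["reroute_shipment", "cancel_order", "delay_order"],  # finance
]

scores = borda_scores(stakeholder_rankings)
print(scores)                       # {'reroute_shipment': 5, 'delay_order': 3, 'cancel_order': 1}
print(max(scores, key=scores.get))  # reroute_shipment
```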

Conversely, teams need transparency and accuracy from their AI systems to trust and accept their outputs. Research on robot-human interaction shows that team performance can be further increased when human teammates have an emotional attachment to a robot. The study links positive emotion toward a technology with increased use of that technology, and vice versa; the challenge lies in determining how humans become attached to robots and AI. The researchers suggest that emotional attachment can be built “either by identifying with the robots by extending their self-concept to include their robots, or by identifying with the team that includes their robots.” Just as AI needs to understand how humans think, humans need to understand how AI systems “think” -- finding common ground is key to building emotional connections and pushing hybrid human-AI teams to peak performance.

It’s easier to trust a machine in a calm, controlled environment when things are going well -- but what about when a crisis strikes? Who should be held responsible when something goes wrong? Should humans always have a chance to intervene before a decision is made or action is taken? While these are tough philosophical questions to answer, knowledge control and asymmetry are an active debate in the AI community. A promising path forward lets AI handle simpler scenarios and alerts a user for more nuanced decision-making. This combines the best of both worlds: the AI automates tedious, repetitive scenarios and suggests actions likely to lead to the most desirable outcome, helping humans better understand their options for success. This again depends on the previous two challenges -- the AI needs to produce results that are relevant and mindful of the team’s needs, and the team needs transparency to justify and audit the AI’s rationale.
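
One way to realize this division of labor is a simple escalation policy: the AI acts autonomously only when its confidence clears a threshold and the stakes are low, and otherwise hands a suggestion to a human. The sketch below assumes hypothetical confidence values, thresholds, and actions; it illustrates the pattern rather than any specific system's behavior.

```python
# Escalation-policy sketch: act autonomously only on high-confidence,
# low-impact decisions; otherwise surface a suggestion for human review.
# Thresholds, fields, and actions are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float   # model confidence in the action, 0.0 to 1.0
    high_impact: bool   # e.g. safety-, legal-, or revenue-critical

def decide(rec: Recommendation, auto_threshold: float = 0.95) -> str:
    if rec.confidence >= auto_threshold and not rec.high_impact:
        return f"AUTO: executing '{rec.action}' (confidence {rec.confidence:.2f})"
    # Nuanced or high-stakes cases go to a human, with the AI's suggestion
    # attached so the team can audit and justify the final call.
    return f"ESCALATE: suggest '{rec.action}' to a human reviewer"

print(decide(Recommendation("restart_service", 0.98, high_impact=False)))
print(decide(Recommendation("issue_refund", 0.97, high_impact=True)))
print(decide(Recommendation("reroute_shipment", 0.62, high_impact=False)))
```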

Embedding Social Cues

Research has concluded that humans are more likely to accept an AI teammate, even in short-term interactions, if the AI system exhibits specific social cues or traits, such as tone, gender, age, gestures, or facial expressions. These traits are foundational to building an emotional connection between people and technology, and the addition of social cues has increased believability, credibility, trust, efficiency, and overall long-term success in human-AI relationships. Social cues can be subdivided into four groups: verbal, visual, auditory, and invisible. Cues are also dependent on context -- for example, while nodding the head up and down signals agreement in western cultures, it signals disagreement in Bulgaria and Albania. To add even more complexity, social cues (like greeting, nodding, smiling, or gesturing) can be combined into various social signals (like welcoming or agreeing). An effective AI teammate must display appropriate social cues so that human teammates can properly understand the intended social signals in context.

Taxonomy of social cues as applicable to AI. Courtesy International Journal of Human-Computer Studies.
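
A rough sketch of how such a taxonomy might be encoded is shown below: individual cues are assigned to one of the four channels, and combinations of cues map to higher-level social signals, with the cultural caveat about head-nodding modeled as a locale override. The specific cue names, channel assignments, and mappings are illustrative assumptions, not the published taxonomy itself.

```python
# Cue-taxonomy sketch: cues belong to one of four channels, and combinations
# of cues are interpreted as social signals. Mappings are illustrative, and
# cultural context is modeled as a simple locale override.

from enum import Enum

class Channel(Enum):
    VERBAL = "verbal"
    VISUAL = "visual"
    AUDITORY = "auditory"
    INVISIBLE = "invisible"   # e.g. response timing, interpersonal distance

CUE_CHANNEL = {
    "greeting": Channel.VERBAL,
    "nod": Channel.VISUAL,
    "smile": Channel.VISUAL,
    "warm_tone": Channel.AUDITORY,
}

def interpret(cues: set[str], locale: str = "western") -> str:
    """Map a combination of cues to a social signal, respecting context."""
    if "nod" in cues:
        # The same cue can carry opposite meanings in different cultures.
        return "disagreeing" if locale in {"bg", "al"} else "agreeing"
    if {"greeting", "smile"} <= cues or {"greeting", "warm_tone"} <= cues:
        return "welcoming"
    return "neutral"

print(CUE_CHANNEL["nod"].value)          # visual
print(interpret({"greeting", "smile"}))  # welcoming
print(interpret({"nod"}, locale="bg"))   # disagreeing
```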

On the flip side, AI must also be able to accurately read social situations. DARPA is funneling millions of dollars into research that will enable AI to understand facial expressions, interpret body language, and detect social signals in conversation. Other research teams have linked body language to emotional state with 98% accuracy in studies of design teams. This emotional awareness helps AI navigate tricky team dynamics, assess patient comfort, or detect student frustration, dynamically tailoring a response to a particular situation.

Structuring Teams with Sociotechnical Frameworks

A nuanced approach is necessary for employees to adopt AI as a teammate; enterprise-wide, top-down approaches often fail because they aren’t accepted by or useful to front-line managers and employees. Researchers have developed a sociotechnical framework for successful human-AI collaboration that can maximize sociotechnical capital, or “the competitive advantage that results from successful collaboration of AI technology and people for adopting organizations.” Like other kinds of capital, sociotechnical capital can be developed over time with deliberate investment.

As AI systems continue growing in scope and sophistication, the roles of employees and the nature of human-AI relationships will change over time and likely increase in complexity. The six configurations summarized in the table below capture the simplest ideal scenarios that a sociotechnical framework might encompass; in reality, systems are likely to overlap multiple configurations or push into entirely new territory. A short sketch after the list encodes these configurations as a lookup table.

Scaling novelty and scope as AI grows. Courtesy Journal of Business Research.  

  1. Section Ia captures automation. These AI systems have very narrow scope, and they are specialized to a specific routine encompassing low-level work. Negative attitudes toward AI are strongest in this section, as this automation may replace many front-line manual laborers. The value AI brings in this section is increased speed, accuracy, reliability, and efficiency. Humans in this area are controllers, providing inputs to the AI systems and accepting outputs. Interdependence and integration between humans and AI are minimal, and the sociotechnical capital generated is low. As an example, this case study concludes that the application of robotic process automation (RPA) to business processes increased efficiency by 21%. The automation here was simple, with low sociotechnical capital: a virtual assistant generated payment receipts and sent one to each customer.
  2. Section Ib represents amplified effects across several organizational functions with the introduction of AI. Humans and AI are still relatively decoupled, with the human conducting the data ingestion and output interpretation processes. In this section, the combination of AI and human effort is more effective than either on its own, generating moderate sociotechnical capital. This survey on predictive analytics provides many potential applications and use cases, including medical decision support systems, fraud detection, insurance processing, financial forecasting, and customer relations management.
  3. Section IIa encompasses the augmentation of humans with the power of AI. The AI and human work hand-in-hand in a complementary nature. Human-AI systems here are tightly coupled, but each maintains a degree of independence; the sociotechnical capital produced is high. Surgical robots have improved immensely thanks to recent innovation, but still rely heavily on their human operators.
  4. Section IIb involves AI systems that co-create with humans to produce completely unique outputs. These technologies fundamentally change the modern workforce and span many platforms and industries. In this section, organizations develop processes and toolsets around their human-AI technologies, as opposed to developing human-centric and AI-centric tools independently. Human-AI relationships here are nearly inseparable, and this section brings the highest levels of sociotechnical capital. Deep learning can be useful in systems that are historically challenging to model -- for example, researchers have unleashed deep learning algorithms on the stock market.
  5. Section IIIa shows autonomous systems, which are highly capable in very specific problem areas. Human involvement in this section serves as ethical or legal intervention. There is moderate interaction between humans and AI in this section, resulting in moderate levels of sociotechnical capital. In this area, sociotechnical capital might be particularly fragile, as exemplified by the acceptance of self-driving cars.
  6. Section IIIb characterizes authentic AI, or black-box systems that achieve the superintelligence goal of AI. These systems would not depend on humans for input, approval, or interpretation. As these are highly futuristic technologies, the AI-human relationship in this section is mostly speculation, but would likely generate low sociotechnical capital due to the independence of each entity.
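
As referenced above, the sketch below encodes the six configurations as a small lookup table, so a team could tag a proposed AI system with its configuration and see the expected human role, coupling, and sociotechnical capital at a glance. The field values paraphrase the descriptions above; the class and key names are hypothetical.

```python
# The six configurations above as a lookup table. Values paraphrase the list
# above; the structure itself is an illustrative encoding, not the authors'.

from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    name: str
    human_role: str
    coupling: str                # interdependence between humans and the AI
    sociotechnical_capital: str

FRAMEWORK = {
    "Ia":   Configuration("Automation",    "controller",            "minimal",     "low"),
    "Ib":   Configuration("Amplification", "ingest and interpret",  "loose",       "moderate"),
    "IIa":  Configuration("Augmentation",  "complementary partner", "tight",       "high"),
    "IIb":  Configuration("Co-creation",   "co-creator",            "inseparable", "highest"),
    "IIIa": Configuration("Autonomy",      "ethical/legal check",   "moderate",    "moderate"),
    "IIIb": Configuration("Authentic AI",  "none",                  "independent", "low"),
}

print(FRAMEWORK["IIa"])  # the augmentation configuration, e.g. surgical robots
```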

This sociotechnical framework is completed by socializing the AI into its new team. Both the AI system and the human teammates must clarify their roles, as defined in the six configurations above, and begin building trust. This may require adjusting the way an AI system develops or communicates a solution for each context it operates in. The AI must sense, comprehend, act, learn, and integrate with its new environment and users. Employees benefit from a clear, early understanding of expectations and an iterative, bidirectional feedback process; this is key to establishing early acceptance and maximizing sociotechnical capital.

Balancing Knowledge Asymmetry and Control

Computing systems have nearly perfect memory and recall compared to their human counterparts, which can make human team members hesitant to share their knowledge with AI teammates. Researchers postulate that when human teammates have the power to delete memory or revoke access to information upon task completion -- an ability coined “knowledge control” -- they are more likely to collaborate successfully with AI systems.
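
A minimal sketch of what knowledge control could look like in practice is shown below: the human teammate shares knowledge with the AI for the duration of a task and can revoke or delete it afterwards. The store and its methods are hypothetical, assumed for illustration rather than drawn from any specific product's API.

```python
# Knowledge-control sketch: the human grants the AI access to shared
# knowledge for a task and can delete it when the task completes.
# The store and its methods are hypothetical.

class ControlledKnowledgeStore:
    def __init__(self) -> None:
        self._facts: dict[str, str] = {}

    def share(self, key: str, value: str) -> None:
        """Human teammate shares a piece of knowledge with the AI."""
        self._facts[key] = value

    def lookup(self, key: str) -> str | None:
        """AI teammate reads shared knowledge while the grant is active."""
        return self._facts.get(key)

    def revoke(self, key: str) -> None:
        """Human deletes the AI's copy once the task is complete."""
        self._facts.pop(key, None)

store = ControlledKnowledgeStore()
store.share("churn_notes", "Q3 churn driven by onboarding friction")
print(store.lookup("churn_notes"))   # Q3 churn driven by onboarding friction
store.revoke("churn_notes")
print(store.lookup("churn_notes"))   # None after revocation
```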

The innate knowledge asymmetry between humans and AI must be acknowledged for successful collaboration. “Perceived knowledge asymmetry” refers to the self-awareness human teammates must have to trust that the AI is able to ingest and process data with higher velocity and volume than any human could. This is a familiar scenario in human-human interaction, and the same level of trust must also be present in human-AI interaction for successful teamwork.

Trust is the foundation for balancing both knowledge control and perceived knowledge asymmetry. As trust increases, humans are more likely to accept less restrictive knowledge control and to delegate tasks effectively. Again, this holds in both human-human and human-AI scenarios. There are even cases where word-of-machine dominates word-of-mouth: users knowingly choose and actively trust AI recommendations over recommendations made by people.

Conclusion

AI has become more than just a tool; it now merits consideration as an additional teammate worthy of collaboration. While this increases a project’s speed and technical rigor, AI teammates bring a fresh set of challenges around social integration, team dynamics, trust, and control. To increase transparency, researchers are exploring ways to embed social cues into algorithms and user interfaces. A sociotechnical framework can help transition AI systems into existing human teams for successful collaboration. Business analysts are evaluating methodologies to balance knowledge control and asymmetry between AI and users.

There are many remaining questions on AI’s trustworthiness: Do we have enough transparency into AI to build meaningful trust? When do we trust AI’s recommendations more than people’s? Are we learning to trust AI, or simply to rely on AI? Trust is a central concept across all indicators of success for human-AI collaboration -- how could hybrid human-AI teams thrive without it?

About the Author

Caitlyn Caggia is a content writer for PDF Electric & Supply. She is an experienced systems integrator and solutions architect focused on analytics and artificial intelligence initiatives. Caitlyn holds her MS in Electrical and Computer Engineering from Georgia Tech.
