Artificial intelligence (AI) is one of the strongest drivers of digital transformation. The basic idea: machines controlled by algorithms complete certain tasks faster and more accurately than humans, who then have more time for other activities. AI is also increasingly being used in recruiting and employee development. According to a study by the German Association of Human Resource Managers (Bundesverband der Personalmanager – BPM) and the HR Tech Ethics Advisory Board (Ethikbeirat HR Tech), around 30 percent of companies are already using AI-based technology in HR or plan to do so soon. According to the survey, the most popular areas of application among HR managers are the optimization of job advertisements and career pages, the analysis of resumes, and the use of chatbots, for example to direct potential candidates to suitable offers within the company.

In HR development, skill matching, interest-based job recommendations, and the internal networking of employees have gained importance in recent years. Dedicated jobs and roles have emerged for “people analytics”: the targeted evaluation of data collected through the interaction between people and technology in the company. People analytics can provide valuable information about, for example, existing skills, needs, and learning interests in the company, and thus serve as a basis for decision-making in the HR department.
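To make “skill matching” a little more concrete: at its simplest, it means comparing an employee’s skill set against job or role profiles and ranking the best fits. A minimal sketch using Jaccard set similarity – all names and data here are illustrative, and real people-analytics systems are of course far more elaborate:

```python
# Minimal, illustrative sketch of skill matching: rank job profiles by
# overlap with an employee's skills. Real systems use richer models;
# all skill and job names below are made up for the example.

def jaccard(a, b):
    """Overlap between two skill sets, from 0.0 (none) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

employee_skills = {"python", "sql", "communication"}
jobs = {
    "data_analyst": {"python", "sql", "statistics"},
    "hr_partner": {"communication", "coaching", "labor_law"},
}

# Sort job profiles so the best-matching one comes first.
ranked = sorted(jobs, key=lambda j: jaccard(employee_skills, jobs[j]), reverse=True)
print(ranked)
```

The same overlap score can just as well drive interest-based job recommendations; only the input sets change.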
Between seal of approval and law
At the same time, the increasing use of AI is causing unease and uncertainty among many people. With big data comes big responsibility – and the question of how much decision-making power we want to, and should, give to algorithms. At what point is it useful to take the “human factor” out of the equation in favor of (supposedly) objective decisions? And where is the “human factor” indispensable? Are humans and machines – or, in other words, biases and machines – inseparable? And what if self-learning machines eventually become so good that they surpass and manipulate the humans who created them? These questions are being asked more and more audibly, but at different volumes. Tech startups in particular are enthusiastically using the new opportunities offered by digital technologies, for example, to change the (working) lives of as many people as possible for the better. They choose pragmatic approaches to meet their responsibility in handling data as best they can, for example through strict self-imposed commitments. The louder calls for regulation, on the other hand, come from the works councils of companies, among others. According to the study mentioned at the beginning of this article, 86 percent of those surveyed are in favor of legal regulations governing the use of AI in organizations.
But are laws really the only worthwhile solution? And aren’t they already too late? Isn’t reality already 20 years ahead of politicians’ level of knowledge? And would regulation stifle innovation? “A company like Apple, which was founded in a garage, could not come into being in Germany simply because of various garage use regulations,” writes Veit Dengler in his article for Der Standard. On the one hand, he points to the quality-of-life achievements that the EU’s strict regulations have brought us; at the same time, he warns against over-regulation as a setback on the path into the future.
“Regulations should focus on the potential of AI instead of on dystopian visions of the future,” agrees Nicole Formica-Schiller, board member of the Federal Association of AI. This association, under whose umbrella mainly tech startups have come together, launched a seal of approval for AI in 2019, with which certified companies impose clear rules on themselves for dealing with AI. A year later, the HR Tech Ethics Advisory Board also published guidelines for the responsible use of employee data. A 2021 survey among works councils showed that while 80 percent would like to see such guidelines, only four percent of works council members were aware of the existing recommendations. And fundamental questions remain: Who monitors compliance with the guidelines? Who has the knowledge needed to understand increasingly complex AI systems? How can legal certainty be created in areas whose future development is hardly predictable because technology is decades ahead of politics?
Bringing artificial and emotional intelligence together
Self-imposed commitments are an important first step. At the same time, companies can make use of another powerful resource to ensure that they aren’t forced into a wait-and-see mode during transitional phases like the current one: their corporate culture. They can move forward boldly and create the best possible conditions to help shape technological change. How?
- Instead of letting themselves be guided by fears, they can first focus on the wonderful opportunities for more humanity that new technologies offer. Yannik Leusch, People Analytics Lead at Kienbaum, has addressed this elsewhere on our blog: “(…) Unfortunately, it is still often the case that when people analytics or data-driven HR is mentioned, many people reflexively think of opaque algorithms that make automated decisions about employees. (…) We would like to make the case for a new employee-centered, transparent, self-determined, and responsible handling of HR data, because, as an additional basis for decision-making in the interest of the workforce, data can massively improve the substance of HR work.”
- No company can credibly claim to already know all the answers. Instead, companies should ask the right questions now and work with employees and customers to find the answers step by step. All questions, answers, and open ends could be made publicly available so that users can see for themselves where the company stands on AI and data use – a kind of public registry for dealing with AI.
Examples of questions:
- How confident are the decisions made by your algorithmic system?
- Are there any groups that might be advantaged or disadvantaged by the algorithm you are developing in the context in which it is being used?
- What is the potentially harmful effect of uncertainties and errors on different groups?
- Who is responsible if users are harmed by the product?
- What is the reporting procedure and recourse process?
- What are the sources of errors and how will you mitigate their impact?
- To what extent can companies disclose their data sources?
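The question about advantaged or disadvantaged groups can at least be probed with simple statistics before an algorithm goes live. One common heuristic from hiring analytics is the “four-fifths rule”: a group whose selection rate falls below 80 percent of the best-performing group’s rate warrants a closer look. A minimal sketch – the function names, groups, and numbers are illustrative, not from the study cited above:

```python
# Illustrative sketch of a disparate-impact check (the "four-fifths rule"),
# one simple way to probe whether a screening step disadvantages a group.
# All group names and figures below are made up for the example.

def selection_rate(selected, total):
    """Share of applicants from a group who were selected."""
    return selected / total if total else 0.0

def disparate_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate.
    Values below 0.8 are a common red flag (the four-fifths rule)."""
    return rate_group / rate_reference if rate_reference else 0.0

# Screening outcomes per group: (selected, total applicants)
outcomes = {"group_a": (45, 100), "group_b": (28, 100)}

rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
reference = max(rates.values())  # best-performing group as reference
for group, rate in rates.items():
    ratio = disparate_impact_ratio(rate, reference)
    flag = "check" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

A check like this answers none of the questions above by itself, but it makes the conversation about them concrete and repeatable.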
- Companies should refrain from leaving it up to the “nerds” to deal with AI. They need an open discussion about the responsible use of algorithms and data across the entire workforce. And dedicated roles to drive, moderate, and document this discussion. People who operate at the intersection of technology and culture. A “Chief of Technology and Ethics,” for example.
- Time for reflection should have a fixed place in the company. Employees need time and safe spaces to ask (critical) questions and confront their own prejudices. BarCamps such as our “Open Wednesdays”, held regularly and dedicated exclusively to the topic of AI, are a great way to involve all employees in the discussion, gather a wide variety of perspectives and arguments, and identify blind spots.
- Companies should allocate more time for internal learning in their staff planning. The issues that employees have to deal with are becoming increasingly complex. If HR employees are to use AI, they must learn to understand the technology and acquire new skills in evaluating data. The same applies to members of the works council. The everyday working life of the future will be largely shaped by (paid) learning. What is needed are appropriate signals from corporate management and new, open, and flexible structures that make this possible.
- Diversity is more important than ever before. A workforce characterized by diversity allows as many experiences, skills, and perspectives as possible to be included in the discussion about ethical data use and (unconscious) biases to be counteracted.
- Job sharing should become the norm in all areas of the company. The “four-eyes principle” ensures that decisions are always shaped by at least two perspectives. Other learning and networking formats, such as mentoring, job shadowing, or peer learning, promote a culture in which one’s own point of view is not regarded as the measure of all things – neither toward oneself, nor toward other people, nor toward machines.
Developing and using AI ethically can only succeed if all stakeholders from business, science, politics, and civil society work together. Companies can take the lead in constantly testing new terrain in the race for the best solutions, ethically anchored in an open and deeply human corporate culture.