Book interview: How AI can be a force for ‘not bad’


The massive productivity gains promised by AI come with the risk of discriminatory and privacy-invading outcomes, says Reid Blackman.

“One of the biggest technological advances that we’re [talking] about now is artificial intelligence,” says Reid Blackman. The development, procurement and deployment of AI and its associated machine learning (ML) is happening at a scale and pace “we’ve not seen before, and it is only going to increase over the next few years.” Which is ‘great’, he continues, explaining how AI can take the grunt-work out of formerly paper-based iterative tasks while increasing speed, productivity and profitability. “But it comes with real ethical risks that can and have been realised.” And when they are realised, he says, such is the nature of AI that “they always happen at scale.”

Most of us will be aware of the ‘holy trinity’ of these risks: bias, lack of transparency and privacy. But there are plenty more to go with them, as Blackman’s ‘Ethical Machines’ explains. The underlying problem, though, is essentially the same whichever issue you address: an AI system ending up with ethically unacceptable impacts.

As Blackman says, there are plenty of companies that have learned this the hard way. Amazon abandoned its CV-reading AI after two years because the company “couldn’t figure out how to stop it discriminating against women.” There’s the case of the US healthcare company currently under investigation for creating an AI that recommends focusing medical attention on white patients rather than those of other ethnicities. Goldman Sachs was investigated over an AI that set credit limits on its Apple Card lower for women than for men; while the US financial giant was cleared of any wrongdoing, the round of negative press attention that followed the investigation still did reputational damage. As for Facebook and AI, Blackman says: “Well, there’s just a lot there.”

There’s a lot to unpick here, says Blackman, not least the widespread collateral damage. In such cases there is the obvious harm done to the people who are “wronged by the ethically disastrous effects of AI gone bad”. But there is also the reputational, regulatory and legal fallout for the companies using the AI, often in good faith. What’s particularly alarming about these scenarios, says Blackman, “is that the risks are always larger in scope precisely because AI operates at scale.”

We read it for you

Ethical Machines

As Reid Blackman says in ‘Ethical Machines’, the promise of AI in companies is ‘tantalising’. Its ability to handle large quantities of data and make lightning-fast decisions means that there are big productivity gains to be made where it is applied properly. And yet there are risks that come with handing over routine decision-making tasks to AI, and the outcomes can be serious, especially when automated job-application sifting turns out to be discriminatory, or when medical-data AIs fail to take privacy into account. These pitfalls can be avoided, says Blackman, by considering ethics when developing, procuring or deploying black-box algorithms. Perhaps counterintuitively, it’s not a question of good behaviour (which companies are prepared for) but of thinking through the consequences. To do this we need to stop thinking of ethics as a soft skill and start taking the idea seriously. Good stuff.

When AI goes wrong in situations like these, it’s not necessarily the case that the executives responsible for implementing it are failing to observe the requirements of their corporate codes of conduct. It’s much more likely that the ethics of the situation haven’t been thought through. But you can’t simply tell the engineers responsible for the AI implementation to be more ethical, because “they don’t see it that way. To them ethics is a soft subject.” Because there are no hard facts and figures to play with, the only thing you can do with this “stew of hot, soft non-facts” is to “mop it up and throw it in with the trash.”

This view, says Blackman, who is also CEO of ethics risk management company Virtue, is simply wrong. He was a professor of philosophy for two decades and is quite happy to appear exasperated about how others view ethics. “People tend to think that it’s about being good or right,” he says. “But when it comes to AI, we need to look at the ethical framework that goes with it as being not so much a force for good, but a force for ‘not bad’.” In other words, even if you find the whole idea ‘squishy’ (a concept Blackman introduces in the first line of his book), you must surely recognise its potential for keeping you out of the law courts.

It follows that “technologists will want to know what they can do to protect their brands and the people that they work for. But unfortunately – and it’s not their fault – technologists have no education in ethics. They don’t know what the issues or risks are. More importantly, they don’t know what the sources of those risks are, and if you don’t know that, you won’t be able to understand the strategies and tactics for mitigating those risks.”

That’s the reason Blackman wrote ‘Ethical Machines’: “It’s here to say: look, this is the landscape of AI and ML. Here are the sources of all that risk, and here are the kinds of things that you can do to mitigate those risks.” To underline the point, Blackman has subtitled his book ‘Your concise guide to totally unbiased, transparent and respectful AI’.

Blackman accepts that when the concepts of ethics and AI are in such proximity, a book like this might seem attractive to people who want to use AI as a force for social good, “a kind of activist mentality, if you like.” But this book is not for them: it’s for technologists in corporations whose first responsibility is to ‘do no harm’ (alluding to the Hippocratic Oath, a code of medical ethics that has been with us for more than two millennia). “And that’s what this book is about. Do no harm to your brand. Ethical rule number one is ‘do no harm’ or, as I prefer to say, ‘don’t wrong people’. I rarely use the term AI ethics. I prefer the term AI ethical risk.”

Different organisations are going to have different ethical values, and not everyone is going to be on the same page, says Blackman, before explaining that ethics in business is more about adopting positions “that are reasonable to take. So, you have a situation where some positions are incompatible but still within the realms of reasonableness, while others are incompatible and are not reasonable. The idea is not that every organisation has to sign up for the same set of values: rather, they need to go into AI intentionally, choosing values that are going to guide the design, development and procurement of AI.”

The areas of AI ethics we’re most likely to meet are bias, transparency and privacy, and these form the core of ‘Ethical Machines’. Blackman says: “There are lots of headlines on these topics that prompt people to ask what we’re going to do about it. The issue here is that it’s not squishy. The ethics system is a complex system.”

‘Ethical Machines’ by Reid Blackman is from Harvard Business Review Press, £22

Extract

Do we need an AI ethics code?

You’d be forgiven for thinking that there are already proper codes of conduct and regulations to deal with AI. Corporate codes of conduct urge good judgement, integrity and high ethical standards. There are anti-discrimination laws and, in the case of self-driving cars, laws against killing and maiming pedestrians, even if it’s a bit rainy. Does AI ethics really require a separate treatment? As it happens, it does.

Corporate codes of conduct govern people’s behaviour where they can be aware of what behaviours are off and on the menu. People know, for the most part, how not to do bad things. If they don’t, there is training that can be offered. In the case of AI, though, ethical risks are not realised as the result of bad behaviour; they are the result of not thinking through the consequences, not monitoring the AI ‘in the wild’, and not knowing what one should be on the look-out for when developing or procuring AI. Put slightly differently, while the ethical risks of AI are not novel – discrimination, invasions of privacy, manslaughter and so on have been around since time immemorial – AI creates novel paths to realising those risks. This means that we need novel ways to block those paths from being travelled.

A similar line of thought applies in the case of law and regulations. Since there are new ways to break the law, new techniques need to be created to stop well-intentioned would-be lawbreakers. That’s easier said than done, because some of the techniques for mitigating AI ethical risks run afoul of current law, or operate in a way that is legally unproblematic but ethically, and so reputationally, dangerous. This means organisations can be in the unenviable position of having to decide whether to deploy an ethically risky but legally compliant AI, deploy an ethically sound but illegal AI, or refrain from deploying it at all.

Edited extract from ‘Ethical Machines’ by Reid Blackman, reproduced with permission.

