In machines do we trust?
Election Day and its declaration of winners and losers has morphed into election month, with many outcomes still awaiting final tabulation weeks after the election. Machines got the blame for the delay in 2020, when manufacturers of voting machines came under attack from some rather vocal individuals claiming, without a shred of evidence, that the machines were programmed to favor one candidate over another. Most troubling was the readiness of many to believe such baseless claims. One man interviewed by a news outlet stated, “Well, those machines have computer chips and microprocessors,” as if that were proof enough of fraud. It was easy to place blame on machines, especially since they are unable to defend themselves.
Thankfully there were no serious claims of mass fraud in 2022, though there were some expected technical glitches, such as printers not working, typical for such a high volume of transactions. Despite the use of machines, vote counting is still a largely labor-intensive and time-consuming human process. The fact is that voting machines are not online, and ballot counts are ultimately checked against registered voter lists, efficiently and accurately. Both Republican and Democratic cybersecurity leaders have stated in the strongest language possible that there was no widespread fraud in either the 2020 or 2022 elections. (Paper ballots serve as a check and balance when needed.)
Despite the documented success of these elections, the voting machine scenario is one example of a growing sense of skepticism and mistrust of machines; in the past, there have been several moments when the public balked at the rise of machines. Most once viewed the rise of machines as a way to eliminate jobs, while more recently the focus has turned to accuracy, manipulation, ethical bias, and fear of over-dependence on automation.
Having an informed and healthy skepticism of machines and “systems” is good and is part of what a democracy should always insist on. Unfortunately, voting machines took an unfounded credibility hit in 2020, which has led many local governments to find innovative ways to better engage citizens and inform them how the system works, including all the built-in safeguards. Other technologies, however, have not fared as well, such as facial recognition, traffic speed cameras, blockchain, autonomous vehicles and artificial intelligence. Here are a few examples of technologies that raise legitimate and serious questions.
Facial recognition
When first rolled out, facial recognition appeared too good to be true: machines could recognize and identify thousands of people a minute walking past a camera. People marveled at how Facebook would automatically “tag” people through facial recognition and automatically connect them. Public safety officials were quick to recognize the value in quickly identifying suspects and those wanted on outstanding warrants. Unfortunately, it was soon learned that early generations of the technology often misidentified innocent people, raising alarms as non-white individuals were misidentified at a much higher rate. States and cities began to ban or halt facial recognition because of these failings. It was later disclosed that the algorithms were indeed faulty. And while the technology has been greatly improved, it will take some time to regain public trust.
Speed cameras
Public safety officials tout speed cameras as a means of saving lives and reducing accidents caused by speeding. Here too, citizens questioned their accuracy. But there was another issue: while the cameras might identify vehicles, they initially could not identify who was actually driving. And in one Washington, D.C., neighborhood, citizens grew increasingly angry at the number of citations they were receiving when they swore they were driving at the posted speed. Neighbors banded together and began documenting the citations, which ultimately led the city’s public works department to recheck and ultimately recalibrate the camera. Had citizens not organized to challenge the city, thousands of drivers would have continued to be wrongly fined.
Blockchain
Blockchain technology has been closely associated with Bitcoin and other cryptocurrencies, but blockchain serves only as the underlying platform. The cryptocurrency ventures built on top of it are largely unregulated and operate by their own sets of rules. This highly speculative enterprise has received much attention, as there have been notable hacks and failures leading to the loss of billions of dollars and, rightfully, of consumer confidence along the way.
Autonomous vehicles
Driverless or autonomous vehicles have generated much publicity, from cars that park themselves to cars that will ultimately be fully driverless, summoned to our location with an app and taking us to our destination. But recent reports have shown them to be downright dangerous under certain circumstances. Lives have been lost and accidents have occurred, raising safety and ethical alarms.
Artificial intelligence (AI)
AI technology is in its infancy but is taking hold in many businesses and government agencies, including the military. AI comprises a powerful set of technologies that can analyze data and seek out anomalies and patterns in milliseconds, compared with the days or years it might take humans. AI can also interpret human speech and respond, as we have come to expect from Alexa and Siri.
Today most AI systems still operate as “augmented intelligence,” where AI is used to support more efficient human decision-making, as opposed to “artificial intelligence,” where machines begin to think or act on our behalf. Here too, citizens have expressed concern about the accuracy, bias and lack of human oversight of automated processes. Still others worry about what they view as a looming surveillance state, which they see as a threat to freedom and privacy.
Ironically, of all these examples, voting machines have proven rather reliable because of their decentralized nature and the limited role they play, coupled with the many checks and balances firmly in place and a highly transparent system. But a skeptical public requires greater citizen outreach and education. In Arizona, cameras were set up so the public could watch ballot processing in real time as a way of demonstrating how the system works. The other technologies mentioned have not been as transparent or as well understood. Facial recognition, traffic speed cameras, blockchain, autonomous vehicles and AI all suffer from similar and well-deserved concerns about transparency, accuracy and more.
Simply put, we cannot place blind trust in machines, and technology leaders must be ever vigilant about the technology we use and how it is perceived. We cannot take the public for granted, and we would be well served to consider creating technology review boards that include citizens, demonstrating a commitment to transparency and fairness.
Dr. Alan R. Shark has served since 2004 as vice president, public sector, and executive director of the CompTIA Public Technology Institute (PTI) in Washington, D.C. He is a fellow of the National Academy of Public Administration and chair of its Standing Panel on Technology Leadership. He is an associate professor at the Schar School of Policy and Government, George Mason University, and is a course developer and instructor at the Rutgers University Center for Government Services. He is also the host of the popular bi-monthly podcast, CompTIA Sharkbytes. Dr. Shark’s thought leadership activities include keynote speaking, blogging and Sharkbytes. He is the author or co-author of more than 12 books, including the nationally recognized textbook “Technology and Public Management” as well as “CIO Leadership for Cities and Counties.”