Dealing with privacy issues isn’t as simple as the tech (and all too often the security) industry thinks. There are real challenges and ethical conundrums to work through, and we ignore the realities of the society into which our products are going at our own peril. We are seeing this play out in full force right now with AI. Fearing they will be left behind, too many companies are throwing AI into the mix with the old “move fast and break things” mentality. That is especially surprising in the identity industry, given that we have seen firsthand how much negative impact that approach can have.
Doing all we can to prevent tragedies like the one below requires multiple changes in how we build products. I can’t claim to know all the answers. But I do know it starts with assembling diverse teams, which means bringing those who live with these threats into the design process. Incorporating abusability testing is key, and there is a clear need for applied ethics.