Default

Should Digital Safety Be A Matter of Design Or Left to Settings?

A client tries to take a screenshot of a banking app but gets stopped by a security warning. This feature is meant to keep sensitive financial information safe.

On the other hand, social media platforms like Instagram and Facebook are built for sharing and connecting. Unless users set up privacy filters themselves, these platforms often allow wide access to user data. This makes users more vulnerable to cyber threats.

What do these two situations have in common? Both are using more artificial intelligence, but they still work very differently. The main difference might be about intent, responsibility, or control.

If technology can strongly protect users in one area, why does it seem so open in another?

A key question is whether artificial intelligence can be built with strong ethical limits that protect user data from the start, rather than adding them later.

Is security truly built into systems, or quietly shifted onto users?

Today, the gap between technology and users has narrowed significantly. Thanks to artificial intelligence, digital tools are easier to use, and convenience is now expected.

But as things get more convenient, systems also become more complicated. This added complexity brings more security concerns. Security is often promoted as a feature, set up, and managed, but it is not always guaranteed. Tools like privacy settings, consent forms, and reporting options help protect users, but only if people know how to use them.

This leads to another question: are users truly protected by design, or are they just expected to protect themselves?

Can AI be trained at the design level itself?

As security worries grow, artificial intelligence is playing a bigger role in defending against cyber threats. Verification and compliance processes are changing quickly. Many modern systems now use AI to spot unusual activity, find suspicious behavior, and automate protection.

Research on Ethics-by-Design shows that values such as privacy, accountability, and fairness can be built into AI systems during development rather than added later.

Similarly, the field of AI alignment focuses on training systems to adhere to standards. In the same way, AI alignment is about training systems to follow human values and avoid causing harm. But research shows that these systems still struggle to fully understand complex human intentions. It continues to rest primarily with users. Systems may verify user identity, but they do not consistently account for user or third-party intent. While it may be argued that exploration entails consequences, it is necessary to consider whether such consequences should be severe enough to cause lasting harm to the user experience.

This raises a deeper question: can artificial intelligence be designed from the start to detect and address harmful or illegal intent or predict possible misuse? Can the focus of security measures be shifted towards the perpetrators’ perspective to prevent mishaps, rather than just protecting victims after the act?

This is not just a theoretical issue. Studies on AI misuse show that people can get around safeguards by using carefully worded prompts that hide harmful intentions. Large studies of real-world prompts show how easily these protections can be bypassed.

This highlights a big problem: while today’s AI systems are good at spotting patterns, they are much less reliable at understanding intent, especially when it is hidden or subtle.

Work has already been initiated to tackle these challenges. One growing area in AI safety is ‘red teaming,’ where systems are tested against simulated attackers before release to identify weaknesses.

This shows a move toward proactive design, where risks are anticipated and planned for rather than fixed after the fact. Methods like modular oversight aim to keep AI systems ethical throughout their whole life, not just treat safety as an afterthought.

These discussions are no longer limited to research labs or cybersecurity circles. They are increasingly becoming part of broader public and policy conversations around digital freedom, governance, and accountability.

One such platform is the upcoming Think Twice Conference, which will bring together policymakers, technologists, researchers, and civil society voices to examine the relationship between artificial intelligence, governance, and digital rights. The conference itself revolves around questions that closely mirror the concerns raised here: how can AI strengthen governance while protecting digital freedom, and how can digital freedom shape the governance of AI? 

In many ways, the growing relevance of such forums reflects a broader reality: the conversation around AI safety is no longer just about innovation, but about the kind of digital society we are collectively designing.

Still, most systems today operate reactively, as seen in cybersecurity. They respond to misuse or breaches rather than preventing them before they occur.

For instance, the recent debate over age-verification laws clearly reflects this tension. Governments and digital platforms are increasingly exploring AI-backed age estimation and biometric verification systems to prevent minors from accessing harmful online spaces. Supporters believe that such systems represent a proactive step toward digital safety. Critics, however, argue that these measures would normalize surveillance, expand data collection, and create new privacy risks.

This debate highlights another underlying layer of the issue: can systems designed to prevent harm do so without compromising the very freedoms and privacy they are meant to protect?

Most modern systems focus on helping victims. They protect data after it has been exposed, fix problems after misuse occurs, and add filters only after harm has occurred. Hence, the loop of damage and repair continues.

Since misuse is not just possible but often expected, we need to ask whether systems should continue to be designed this way.

Until we solve this, ethical artificial intelligence might be judged more by the threats it misses than by the ones it stops. At this point, the idea may seem hypothetical or even unreasonable. But the history of innovation has repeatedly shown that solutions often emerge from perspectives, questions, and ideas that once seemed far-fetched. After all, isn’t that how progress begins?”

********************************************************************************************

When you come to think of it…

We are perhaps at a juncture where conversations about AI ethics, transparency, fairness, governance, AI regulation, and digital freedom matter more than ever.

As mentioned before, spaces such as the upcoming Think Twice Conference seek to bring these questions into public dialogue, bringing together diverse ideas to debate how digital systems should evolve in the years ahead.

If these questions resonate with you, perhaps it is worth saving the date or even contributing your own perspective to the discussion.

More information about the conference and speaker submissions can be found here:

Think Twice Conference – Call for Speakers

0 comments on “Should Digital Safety Be A Matter of Design Or Left to Settings?

Leave a Reply

This website generates anonymous visitor statistics to measure user engagement. You agree by using the website further.

Privacy policy