Online privacy is one of today's most prominent subjects of debate. Between serious offenses like personal data leaks from banks or insurance companies and smaller annoyances like calls from salespeople conveniently offering a service you happen to need, feeling that your information is protected has never been more important. And that's before we even consider dangerous situations like credit card theft.
We are constantly reminded that we are being monitored, and eventually we realize that even the trackers that follow us in ads from one website to another are a violation of our privacy.
Of course, keeping our secrets is important, but privacy is a complex matter that occasionally involves violations of the law. The dark web is a hidden space for those violations, and it certainly needs stricter monitoring.
Instead of delving into the principles of the dark web, I will focus this article on the modern trends in restricting illegal activities and the possible application of AI.
Enhancing The Process
Whenever the dark web comes up, two other terms usually come to mind immediately: the Tor Browser and VPN services. Dark web dwellers use both, and both are the most prominent targets for anyone monitoring it. However, as governments around the world attempt to tighten the leash, VPNs and Tor change as well. When they get blocked, newer, more inventive, and more powerful tools take the stage, all in the name of protecting privacy.
It might sound like sci-fi, but such technology has been used for years to bypass China's "Great Firewall," and the fact that it keeps developing and growing is proof of its success. The beauty of it is that a conventional algorithm has no way of detecting anything unusual. Can AI fare any better, and if so, how?
Conventional tools work by tracing the connection. Since that part of the problem is well covered by privacy services, a malicious actor can remain happily hidden from the all-seeing eye. Truly understanding whether we are dealing with someone bypassing restrictions or with an average user requires a deeper analysis of behavior rather than of the connection itself.
For instance, if a user is browsing an online tech store, what red flags would a human look for? The time spent on a page, the time spent on the website as a whole, and the volume of traffic flowing in and out. All of this data raises reasonable questions about how long it takes to look through the items in a single store, choose one, and buy it. Answering those questions can show whether you are likely dealing with someone who is covering up their online activity.
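To make the idea concrete, here is a minimal sketch of those human red-flag checks as rules. The session fields, threshold values, and flag messages are all hypothetical illustrations, not figures from any real detection system:

```python
from dataclasses import dataclass

@dataclass
class Session:
    seconds_on_page: float   # dwell time on the current page
    seconds_on_site: float   # total time spent on the site
    bytes_transferred: int   # traffic in and out during the session

def red_flags(s: Session) -> list[str]:
    """Flag sessions whose timing or traffic doesn't match normal shopping."""
    flags = []
    # A real purchase normally takes minutes of browsing, not seconds.
    if s.seconds_on_site < 30:
        flags.append("site visit too short for a real purchase")
    # Heavy traffic over a very brief page view suggests the store is being
    # used as a relay rather than actually being shopped at.
    if s.bytes_transferred > 50_000_000 and s.seconds_on_page < 10:
        flags.append("traffic volume far exceeds what the page serves")
    return flags
```

Fixed thresholds like these are exactly what sophisticated tools learn to stay under, which is why the article argues for a more adaptive approach.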
This is the process a human would use, and it’s something AI can imitate.
AI Does As Humans Do
Unlike humans, who would need an enormous amount of time and effort to detect unusual behavior, AI can be incorporated into existing tools to do it faster and more efficiently. Its capabilities allow it to review a whole set of parameters at once: not just how much time a person spent on a specific page, but how much traffic passed between them and the site, and how this behavior differs from that of the website's typical visitor. In essence, the work requires a personalized approach, and AI can deliver exactly that.
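The "compare against the typical visitor" step can be sketched as simple statistical anomaly scoring: measure how many standard deviations a visitor's behavior sits from the site's baseline across several features at once. The feature names and data are invented for illustration; a production system would use a learned model rather than raw z-scores:

```python
from statistics import mean, stdev

def anomaly_score(visitor: dict[str, float],
                  baseline: list[dict[str, float]]) -> float:
    """Average absolute z-score of a visitor against typical sessions."""
    scores = []
    for feature in visitor:
        values = [b[feature] for b in baseline]
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # feature carries no variation, so no signal
        scores.append(abs(visitor[feature] - mu) / sigma)
    return sum(scores) / len(scores) if scores else 0.0
```

A visitor whose dwell time and traffic volume both sit far outside the baseline will score orders of magnitude higher than an ordinary shopper, which is the personalized, per-site judgment the paragraph above describes.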
Yet we can't be sure this potential path for AI is free of challenges. The biggest one so far is what I covered at the very beginning: the rapid development of privacy services. It's a natural, inevitable cat-and-mouse game. One side comes up with more complex blocking patterns, and the other responds by tweaking its approach.
Here, it's more reasonable to bet on the expertise the developers already possess simply because, at this point, they know the playing field very well. In a way, both sides react to each other's moves, but the blocking side has governments behind it, while the opponent has to deliver creativity and innovation.
Still, for AI to become a permanent solution, there has to be a precedent. AI is already used to detect rule violations on social media, yet we have already seen this technology overstep its mandate. That issue needs to be addressed before we talk about entrusting AI with a task as serious as monitoring privacy.
The final, deciding vote should probably remain with humans, but we simply don't have enough human resources to supervise these systems. Big tech companies are certainly looking into managing the risks that come with growing AI usage. OpenAI's answer is to build an "automated alignment researcher," essentially an AI that oversees other models and their training. Eventually, we will see whether this initiative ends up a win or a failure.
Using AI to detect illegal online activity is becoming increasingly essential, and I believe we will soon see new AI-powered instruments emerge to address the issue.