Outsmarting Akamai’s Bot Detection with JA3Proxy | HackerNoon

News Room · Published 18 July 2025

Akamai Bot Manager is one of the most common anti-bot solutions on the market. It’s used by many high-profile websites, from e-commerce to travel, and, depending on its configuration, it can be challenging to bypass.

Based on my experience, the typical pattern I encounter when a website activates Akamai Bot Manager protection is that the scraper (usually a Scrapy one in my stack) hangs and times out from the first request.

But what is Akamai Bot Manager, and how can I see if a website is using it?

Akamai Bot Manager Overview

Akamai’s bot detection, like every other modern anti-bot protection software, works on multiple layers.

Network fingerprinting is one of the first: Akamai analyzes the TLS handshake and connection details (JA3 TLS fingerprint, cipher suites, TLS version, etc.) to see if they match a real browser or a known automation tool. Each browser (Chrome, Firefox, Safari, etc.) has a characteristic TLS fingerprint, and if your client’s TLS Client Hello doesn’t match any common browser, Akamai knows something is fishy.

It also inspects HTTP/2 usage and header order – real browsers almost always use HTTP/2+ these days, and they send HTTP headers in a particular order and format. If a client is still using HTTP/1.1 or has headers in a non-browser order, it’s a red flag. Additionally, Akamai looks for browser-specific headers or values that scrapers might omit; ensuring your headers (User-Agent, Accept-Language, etc.) and their order mirror a real browser’s is crucial.

Another layer is IP reputation and analysis. Akamai checks whether the client IP belongs to a residential network, a mobile network, or a datacenter. Residential and mobile IPs (the kind real users have) score high on trust, whereas known datacenter IP ranges are automatically suspect. High-volume requests from a single IP address or subnet will also lower the trust score. This is why successful scraping often requires rotating residential proxies, so that traffic appears to come from different real-user locations rather than a cloud server farm.

Finally, Akamai employs behavioral analysis using client-side scripts and AI models. A JavaScript sensor on the webpage collects a multitude of data points about the client’s environment and interactions (such as timing, mouse movements, or their absence, unusual properties in the browser object, etc.). Akamai’s AI models crunch this data to assign a bot likelihood score to each session. This aspect is the most challenging to bypass and often requires running a headless browser or replicating the sensor logic. (It’s beyond our scope here – our focus will be passing the network-level checks, which is the most common case for e-commerce websites, in my experience.)

But how can we detect that a website is using Akamai Bot Manager?

Apart from using the usual Wappalyzer browser extension, the clearest sign that a website protects itself with Akamai is the presence of the _abck and ak_bmsc cookies.
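As a quick illustration, here’s a minimal Python sketch of such a check, assuming the plain request gets a response at all (on heavily protected sites it may simply hang, which is itself a strong hint):

import requests

# Cookie names that signal Akamai Bot Manager, per the discussion above.
AKAMAI_COOKIE_NAMES = {"_abck", "ak_bmsc"}

def looks_like_akamai(url: str) -> bool:
    # Short timeout: a hanging request is itself suggestive of bot protection.
    response = requests.get(url, timeout=10)
    cookie_names = {cookie.name for cookie in response.cookies}
    return bool(AKAMAI_COOKIE_NAMES & cookie_names)

print(looks_like_akamai("https://www.example.com/"))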

Given these defenses, many Scrapy users have turned to the scrapy-impersonate download handler to bypass Akamai. This plugin integrates the curl_cffi library to impersonate real browsers’ network signatures.

In practice, Scrapy Impersonate makes your Scrapy spider’s requests “look” like a Chrome or Firefox: it offers TLS fingerprints (JA3) that match those browsers, uses HTTP/2, and even adjusts low-level HTTP/2 frame headers to mimic the browser’s patterns. By doing so, it addresses the TLS and HTTP/2 fingerprinting issue – a Scrapy spider with this handler can handshake with an Akamai-protected server in a way that’s almost indistinguishable from a normal Chrome browser.

Limitations of Scrapy Impersonate

While Scrapy Impersonate is a powerful tool, it comes with certain limitations:

  • Locked into Scrapy: Scrapy Impersonate is designed as a Scrapy download handler, which means it only works within Scrapy’s asynchronous framework. If your project doesn’t use Scrapy or you want to switch to a different framework (like a simple script with requests/httpx or an asyncio pipeline), you can’t directly carry over its capabilities. Migrating away from Scrapy often means a complete rewrite of your HTTP logic, and you’d lose the built-in TLS spoofing unless you implement a new solution from scratch.

  • Proxy Rotation Challenges: Using Scrapy Impersonate alongside proxy rotation can be tricky. Under the hood, it replaces Scrapy’s default downloader with one based on curl_cffi, which doesn’t seamlessly integrate with Scrapy’s proxy middleware. Early adopters discovered that HTTPS proxy support was broken because the proxy handling code was bypassed. Although fixes and workarounds (like disabling Scrapy’s built-in proxy handling and configuring curl_cffi directly) exist, it’s harder to rotate proxies or handle proxy authentication with this setup. Robust error handling for proxy failures (e.g., detecting a dead proxy and retrying) is not as straightforward as with Scrapy’s standard downloader, because errors bubble up from the curl layer and may not trigger Scrapy’s usual retry logic.

  • Maintenance and Flexibility: Scrapy Impersonate currently supports a finite list of browser fingerprints (Chrome, Edge, Safari, etc., up to certain versions). This list can lag behind the latest browser releases. You might be stuck impersonating an older browser version, which could be a problem if a target site specifically requires the nuances of a newer TLS handshake (some advanced WAFs actually check minor details that change between Chrome versions).

  • Not a Silver Bullet: Perhaps most importantly, even with proper TLS and HTTP/2 impersonation, Akamai Bot Manager can still detect and block you. On websites with a higher level of protection that also checks the browser fingerprint, no browserless setup, Scrapy Impersonate included, is sufficient against Akamai or similar top-tier bot defenses. You might get past the TLS handshake but fail on other signals (like the absence of the expected sensor data or subtle discrepancies in headers/cookies). In other words, it’s a piece of the puzzle, not a complete solution.

The solution we’ll see today addresses the first two points: we’ll chain JA3Proxy, for a browser-grade TLS fingerprint, with a rotating residential proxy to rotate our IPs and earn a higher reputation score.

Understanding TLS Fingerprints and JA3

Before diving into the solution, it’s important to understand what exactly we are spoofing. Every HTTPS client presents a unique TLS fingerprint during the handshake. This fingerprint is a combination of the TLS protocol version and a bunch of options the client says it supports – think of it as the client’s “dialect” of speaking TLS. Key components include:

  • Supported TLS Version: e.g. TLS 1.2 vs TLS 1.3. Modern browsers will offer 1.3 (while still allowing 1.2 for compatibility). Older clients or some libraries might only do 1.2.

  • Cipher Suites: the list of cryptographic algorithms the client can use, in preferred order. Browsers tend to have long lists including ciphers like AES-GCM and ChaCha20, plus some GREASE values (randomized placeholders that keep the protocol extensible and also frustrate naive fingerprinting).

  • Extensions: extra features in TLS, like Server Name Indication (SNI), supported groups (elliptic curves), ALPN (which is used for HTTP/2 negotiation), etc. Both the presence of certain extensions and their order matter.

The concept of JA3 fingerprinting is a standardized way to record these TLS Client Hello details. JA3, named after its creators’ initials, composes a fingerprint string by concatenating the above fields in a specific order:

JA3_string = TLSVersion,CipherSuiteIDs,ExtensionIDs,EllipticCurveIDs,EllipticCurveFormatIDs

Each list (ciphers, extensions, etc.) is joined by - and the sections by ,. For example, a Chrome browser might produce a JA3 string like:

771,4866-4867-4865-....-47-255,0-11-10-16-23-...-21,29-23-30-25-24,0-1-2

This represents TLS 1.2 (771 is 0x0303), a specific set of cipher suites, extensions, supported curves, and curve formats (the numbers are standardized IDs). The JA3 string is then MD5 hashed to create a compact 32-character fingerprint value. Security tools often log or compare the MD5 hash (since it’s easier to handle than a long string of numbers).
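To make that concrete, here’s a toy Python sketch deriving the hash; the JA3 string is an abbreviated, illustrative value, not a real browser’s:

import hashlib

# TLSVersion,CipherSuiteIDs,ExtensionIDs,EllipticCurveIDs,EllipticCurveFormatIDs
ja3_string = "771,4866-4867-4865,0-11-10-16-23-21,29-23-30-25-24,0-1-2"

# The JA3 fingerprint is simply the MD5 hex digest of that string.
ja3_hash = hashlib.md5(ja3_string.encode("ascii")).hexdigest()
print(ja3_hash)  # a compact 32-character value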

Why does this matter for bot detection? Because browser TLS stacks are fairly uniform. Chrome version X on Windows will always present the same JA3 fingerprint down to the list ordering. Firefox will have its own distinct JA3.

Python’s requests library (which uses OpenSSL under the hood) has a JA3 that’s totally different from any mainstream browser, so it’s easily detectable.

Anti-bot services like Akamai maintain databases of JA3 hashes: if your JA3 isn’t on the “known good” list (common browsers) or if it’s on a known automation list, you’ll get flagged. In fact, JA3 was originally created to track malware and bots by their TLS handshake, and it has become a handy tool for anti-scraping as well.

In summary, to pass Akamai’s TLS fingerprinting checks, we need our client’s JA3 to match a popular browser.

This usually means mimicking the latest Chrome or Firefox fingerprint (since those are the most common legit users on the web).

Simply changing the User-Agent string isn’t enough – we must modify the low-level TLS handshake. Scrapy Impersonate does this internally via curl_cffi (which itself leverages a special build of curl and TLS libraries to imitate browsers). But outside Scrapy, we need another way to achieve the same effect.

TLS Impersonation Proxy + Residential Proxy Chain

Our solution is to chain two proxies to make our scraper virtually indistinguishable from a real browser user:

  1. JA3Proxy for TLS Impersonation: JA3Proxy is an open-source tool that acts as an HTTP(S) proxy and relays your traffic with a chosen TLS fingerprint. In other words, you run JA3Proxy locally, configure it to imitate a specific browser’s TLS handshake, and then direct your scraper traffic through it. JA3Proxy will terminate your TLS connection and initiate a new TLS handshake to the target site using the impersonated fingerprint. From the target site’s perspective, it looks like, say, a Chrome browser connecting. The beauty of this approach is that it’s client-agnostic – you can use Python requests, httpx, cURL, or anything else, by simply pointing it at JA3Proxy. You are no longer locked into Scrapy or any particular library to get browser-like TLS; the proxy takes care of it.

    Under the hood, JA3Proxy uses uTLS (a Go library for crafting custom TLS Client Hellos) to customize the handshake. It supports a variety of client profiles (Chrome, Firefox, Safari, etc., across different versions), so you can configure it to mimic the most recent browsers available in the library. For our needs, we’d choose the latest available Chrome fingerprint, Chrome 133. As with Scrapy Impersonate, support for the newest browser releases can lag a little, but as long as the library keeps being updated, that’s not an issue.

    One thing to note: JA3Proxy focuses on TLS fingerprints (the JA3 part). It doesn’t modify HTTP headers or handle higher-level browser behaviors; ALPN, which drives HTTP/2 negotiation, is part of the TLS handshake itself.
    It gets us past network fingerprinting, which is the hardest layer to change, but we must still ensure our HTTP headers and usage patterns are correct. Luckily, we can manually set headers in our HTTP client to mimic a browser (User-Agent, etc.), and HTTP/2 comes along as long as the TLS negotiation allows it (Chrome’s Client Hello advertises ALPN support for h2, so if the site supports it, JA3Proxy will negotiate HTTP/2).

  2. Residential Proxy for IP Rotation: The second part of the chain is an upstream residential proxy. This will take care of the IP reputation and distribution.

    The combined effect is powerful: to Akamai, your scraper now looks like Chrome 133 running on a residential IP. The TLS handshake matches Chrome’s JA3, the HTTP/2 usage and headers can be adjusted to match Chrome, and the source IP belongs to a regular household. This addresses the major fingerprinting vectors at the network level.

    It doesn’t solve Akamai’s JavaScript-based challenges by itself, but this should be enough to bypass most of the websites you’ll encounter.
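Putting the chain together, the request path looks like this (ports and endpoints are whatever you configure in the setup below):

scraper (requests/httpx/cURL) → JA3Proxy on localhost (Chrome TLS fingerprint) → residential proxy (SOCKS5, rotating exit IP) → target site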

Setup Guide for JA3Proxy

Let’s set up JA3Proxy and chain it with a residential proxy. The following steps will get you up and running.

Install JA3Proxy

JA3Proxy is written in Go. You have two easy options: compile from source or use a Docker container. To build from source, you’ll need Go installed. Run:

git clone https://github.com/LyleMi/ja3proxy.git
cd ja3proxy
make

This should produce a ja3proxy executable in the folder. (Alternatively, you can run go build manually since the project is Go-based.)

If you prefer Docker, there’s a pre-built image on GitHub Container Registry. For example:

docker pull ghcr.io/lylemi/ja3proxy:latest

will fetch the latest image. You can then run it with docker run (we’ll show the full run command in a moment). Docker is convenient because it packages everything without needing a local Go environment.

In my personal experience, the installation was a bit of a nightmare. I couldn’t get the Docker image to work: I consistently received errors when connecting to it, because no browser profile was recognized. I then built it manually on my Mac and hit the same errors. After hours of debugging, I found that I needed to update some dependencies, especially uTLS; version conflicts between the libraries were causing the issues. In the end I got it installed, so if you hit errors at first, don’t give up.

Obtain or Create TLS Certificates

JA3Proxy can act as an HTTPS proxy, which means it intercepts TLS and presents its own certificate to your client.

By default, it looks for cert.pem and key.pem files for a certificate to use. If you don’t provide one, you might run it in plaintext mode (as a regular HTTP proxy) and simply ignore certificate verification in your client (not recommended for production, but acceptable for testing).

The best practice is to generate a self-signed root certificate and key, and configure your scraper to trust that certificate, so that you can intercept traffic without security warnings. You can generate one using OpenSSL, for example:

openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes -keyout key.pem -out cert.pem -subj "/CN=JA3Proxy"

This creates a cert.pem/key.pem pair valid for a year. (For production use, you might even use a legitimate internal CA if you have that setup, but for most scraping purposes, a self-signed is fine as long as your client knows to trust it.)

Launch JA3Proxy with a Chrome fingerprint

Now we run the proxy. If using the binary, execute a command like:

./ja3proxy -port 8080 -client Chrome -version 131 -cert cert.pem -key key.pem -upstream YOURPROXYIP:PORT

Let’s break down this command:

  • -port 8080 tells it to listen on port 8080 (you can pick another port if needed).
  • -client Chrome -version 131 selects the fingerprint profile. In this example, it uses the built-in profile for Chrome 131. Replace these with the profile matching the browser/version you want – e.g., -client Chrome -version 130 if your build supports Chrome 130. (You can find the list of available fingerprints in JA3Proxy’s documentation or in the uTLS library it uses. Profiles include various Chrome and Firefox versions, Safari, Edge, etc.)
  • -cert and -key specify the TLS certificate files we generated in step 2.
  • -upstream YOURPROXYIP:PORT is the address of the upstream proxy. This should be replaced with your residential proxy endpoint. Important: JA3Proxy expects this to be a SOCKS5 proxy address. If your provider gave you something like proxy.provider.com:8000 with a username/password, you can try the format username:password@proxy.provider.com:8000. (JA3Proxy will parse the string and should handle authentication for SOCKS5 if given in user:pass@host:port form. If that doesn’t work, you might configure your residential proxy to use IP whitelisting to avoid auth, or run a local Dante SOCKS proxy that forwards to an authenticated HTTP proxy, etc. – those are workaround options.)

If using Docker, the equivalent would be:

docker run -p 8080:8080 \
    -v $(pwd)/cert.pem:/app/cert.pem -v $(pwd)/key.pem:/app/key.pem \
    ghcr.io/lylemi/ja3proxy:latest \
    -client Chrome -version 133 -cert /app/cert.pem -key /app/key.pem \
    -upstream YOURPROXYIP:PORT

We mount the cert and key into the container and expose port 8080. Adjust the command to include your actual proxy credentials/host. Once this is running, JA3Proxy will be listening on localhost:8080 (or whatever host/port you specified).
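Before wiring up a scraper, a quick smoke test with cURL confirms the proxy accepts connections; -k skips verification of the proxy’s self-signed certificate (alternatively, pass --cacert cert.pem to trust it explicitly):

curl -x http://localhost:8080 -k -v https://www.example.com/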

A real-world use case – MrPorter.com

MrPorter.com is a fashion e-commerce website that, together with many others in the industry, protects itself with Akamai Bot Manager.

By using a simple Python request, as specified in the file simple_request.py in the repository, I encountered a timeout error, as anticipated.

import requests

URL = "https://www.mrporter.com/en-gb/mens/product/loewe/clothing/casual-shorts/plus-paula-s-ibiza-wide-leg-printed-cotton-blend-terry-jacquard-shorts/46376663162864673"

headers = {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
    "accept-language": "en-US,en;q=0.5",
    "priority": "u=0, i",
    "sec-ch-ua": '"Brave";v="135", "Not-A.Brand";v="8", "Chromium";v="135"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": '"macOS"',
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "none",
    "sec-fetch-user": "?1",
    "sec-gpc": "1",
    "service-worker-navigation-preload": "true",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36",
}

def main():
    try:
        response = requests.get(URL, headers=headers, timeout=10)
        response.raise_for_status()
        print(response.text)
    except requests.RequestException as e:
        print(f"Error fetching the page: {e}")

if __name__ == "__main__":
    main() 

Result:

Error fetching the page: HTTPSConnectionPool(host='www.mrporter.com', port=443): Read timed out. (read timeout=10)

Using the Scrapfly TLS fingerprint tool, we can inspect the fingerprint our request presents. I wasn’t able to find a database confirming that this is the fingerprint Python requests commonly produces, but it’s certainly different from the one I get with the Brave browser using the same headers and User-Agent.

The order of the Cipher Suites is different, and therefore, the fingerprint will also be different.

Now, let’s start the JA3Proxy Docker container, without attaching a residential proxy, and see what happens.

docker run -p 8080:8080 \
    -v $(pwd)/cert.pem:/app/cert.pem -v $(pwd)/key.pem:/app/key.pem \
    ghcr.io/lylemi/ja3proxy:latest \
    -client Chrome -version 131 -cert /app/cert.pem -key /app/key.pem

We got the message

HTTP Proxy Server listen at :8080, with tls fingerprint 131 Chrome

so we can use localhost:8080 as a proxy in our Python request script.

Another cause of errors in my setup was that I initially tried to use Python Requests to connect to JA3Proxy. After digging for a while, I found that the Requests library doesn’t support HTTP/2, whereas JA3Proxy negotiates it when impersonating a modern version of Chrome.

For my tests, I needed to use HTTPX, as shown in the file request_with_proxies.py.
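The exact script isn’t reproduced in this post, but here’s a minimal sketch of what request_with_proxies.py might look like. A few assumptions: HTTP/2 support requires installing httpx[http2]; verify=False is used because JA3Proxy presents our self-signed certificate (pass verify="cert.pem" to trust it explicitly instead); and recent httpx versions take proxy= (older ones use proxies=):

import httpx

PROXY = "http://localhost:8080"  # JA3Proxy from the previous step
URL = "https://www.mrporter.com/en-gb/mens/product/loewe/clothing/casual-shorts/plus-paula-s-ibiza-wide-leg-printed-cotton-blend-terry-jacquard-shorts/46376663162864673"

# In practice, reuse the full browser-like headers dict from simple_request.py.
headers = {
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36"
}

def main():
    # verify=False: JA3Proxy terminates TLS with our self-signed certificate.
    with httpx.Client(http2=True, proxy=PROXY, verify=False, timeout=30) as client:
        response = client.get(URL, headers=headers)
        print(response.status_code)

if __name__ == "__main__":
    main()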

In this case, if I call the Scrapfly TLS API again, the first part of the JA3 string (the Cipher order) is identical to that of my browser.

As a final test, if we use this script to request the MrPorter page, we can download it without any issue.

Chaining a residential proxy

Now that we have solved the spoofing of the TLS fingerprint, we just need to rotate the IP that the target website will see.

JA3Proxy has an option that helps us in this, called upstream.

By launching the JA3Proxy command as follows,

./ja3proxy -addr 127.0.0.1 -client Chrome -version 131 -cert cert.pem -key key.pem -upstream socks5h://USER:PASS@PROVIDER:PORT -debug

we’re able to tunnel our requests using our preferred proxy provider.

Please note that you need to connect via SOCKS5, so ensure your provider supports this feature.

By checking the IP after doing this, I can see that my rotating residential IPs are in place, and I can keep downloading MrPorter pages with no issue.

pierluigivinciguerra@Mac 85.AKAMAI-JA3PROXY % python3.10 request_with_proxies.py
200 {"ip":"5.49.222.37"}
pierluigivinciguerra@Mac 85.AKAMAI-JA3PROXY % python3.10 request_with_proxies.py
200 {"ip":"197.244.237.29"}
pierluigivinciguerra@Mac 85.AKAMAI-JA3PROXY % python3.10 request_with_proxies.py
200 {"ip":"41.193.144.67"}
pierluigivinciguerra@Mac 85.AKAMAI-JA3PROXY % python3.10 request_with_proxies.py
200 {"ip":"102.217.240.216"}

Conclusions

In this post, we’ve seen how to bypass Akamai Bot Manager on the MrPorter website. The site’s level of protection is medium, so there’s no complex browser-fingerprint challenge to bypass, but, in my experience, this is the most common scenario you’ll encounter with Akamai.

I chose the JA3Proxy approach so that the solution can be reused across frameworks. If you’re using Scrapy, you can always rely on Scrapy Impersonate, despite its limitations, or try setting the ciphers in the right order manually.

