
Is there a reason why the crawling and browser automation people don't just patch the browser to be controlled with no possibility of detection?

The web page is heavily restricted in what it can access through the various interfaces, and you can feed it anything you want by patching the browser. Once you do that, the problem becomes just simulating a legitimate user to a sufficient degree.

I wonder if that's what's already happening with CDP and ReCAPTCHA and hCaptcha, the two services mentioned as being both strong and a problem. Are they detecting the "stealth" tooling, or is it just the lack of user activity and reputation? Is CDP by itself detectable by some means?



Patching chromedriver is a lot easier than patching the browser. Plus, if you're just using a regular Chrome browser for the automation, then there's nothing to patch. Automated CDP calls aren't detectable if they don't leave any trace of automation activity. However, since Google created CDP, they might have ways of detecting automated CDP in ways that other services cannot.
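For the regular-Chrome case, a minimal sketch of what "nothing to patch" looks like in practice: launch a stock Chrome with remote debugging and drive it over raw CDP, no chromedriver involved. The binary name, port, and profile path are assumptions to adjust for your own setup.

    import json
    import subprocess
    import time
    import urllib.request

    from websocket import create_connection  # pip install websocket-client

    # Launch an ordinary Chrome install with the DevTools endpoint exposed.
    subprocess.Popen([
        "google-chrome",
        "--remote-debugging-port=9222",
        "--user-data-dir=/tmp/cdp-profile",
    ])
    time.sleep(2)  # crude wait for the DevTools HTTP endpoint to come up

    # Ask the endpoint for the first page target's websocket URL.
    targets = json.load(urllib.request.urlopen("http://127.0.0.1:9222/json"))
    ws_url = next(t["webSocketDebuggerUrl"] for t in targets if t["type"] == "page")

    # Send a plain CDP command over the websocket.
    ws = create_connection(ws_url)
    ws.send(json.dumps({"id": 1, "method": "Page.navigate",
                        "params": {"url": "https://example.com"}}))
    print(ws.recv())  # the reply to the navigate command
    ws.close()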


What about faking mouse movement from inside the browser? PyAutoGUI doesn't seem like the right way to be doing this, since the JavaScript you're interacting with has no hope of interrogating the user's operating-system GUI interactions anyway.

And it seems like it would be important to try and adopt user-like mouse movement since JavaScript has access to this information.


PyAutoGUI is the optimal tool for clicking things inside closed shadow-root elements, which are hidden from JavaScript. You can use CDP for clicking other elements.
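To make that concrete, a rough sketch of both approaches; the coordinates and the DevTools websocket URL are placeholders, and <target-id> stands in for a real page target.

    import json

    import pyautogui
    from websocket import create_connection  # pip install websocket-client

    # Placeholder; in practice look this up from http://127.0.0.1:9222/json
    WS_URL = "ws://127.0.0.1:9222/devtools/page/<target-id>"

    # 1) OS-level click with PyAutoGUI: nothing touches the page's JavaScript,
    #    so it also works on elements hidden inside closed shadow roots.
    pyautogui.moveTo(640, 400, duration=0.4)  # glide to the point instead of teleporting
    pyautogui.click()

    # 2) CDP Input.dispatchMouseEvent for ordinary elements; coordinates are
    #    CSS pixels relative to the viewport.
    def cdp_click(ws, x, y):
        for msg_id, event in enumerate(("mousePressed", "mouseReleased"), start=1):
            ws.send(json.dumps({"id": msg_id, "method": "Input.dispatchMouseEvent",
                                "params": {"type": event, "x": x, "y": y,
                                           "button": "left", "clickCount": 1}}))
            ws.recv()

    ws = create_connection(WS_URL)
    cdp_click(ws, 320, 240)
    ws.close()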


The reason, in my experience, is that there's a high barrier to entry for most devs when it comes to setting up a build environment for Chromium and a workflow for patches that still allows you to quickly and easily pull in and apply upstream changes whenever a new Chromium version is released.

In reality, if you know how to use CDP correctly and you have control over the environment that you run the browser in, you have to make very few browser patches.

What I mean by using CDP correctly is that, yes, it is detectable to a certain extent, but it comes down to things like enabling the Runtime domain, for example, which you can easily avoid in your own solution but which libraries like puppeteer / playwright often do out of the box (this is where the "stealth" versions of these libraries come in; they either mitigate by disabling features or use some hacky approaches to instrument the JS that runs on the pages).
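As a rough sketch of that idea (not how any particular stealth library does it): drive the session over raw CDP and simply never send Runtime.enable, or any other domain enable you don't need. The websocket URL is a placeholder, and the assumption that one-off Runtime.evaluate calls work without a prior Runtime.enable is based on my reading of the protocol docs.

    import json
    from websocket import create_connection  # pip install websocket-client

    ws = create_connection("ws://127.0.0.1:9222/devtools/page/<target-id>")
    msg_id = 0

    def cdp(method, **params):
        # Send one CDP command and wait for its reply; note that no *.enable
        # command is ever issued in this session.
        global msg_id
        msg_id += 1
        ws.send(json.dumps({"id": msg_id, "method": method, "params": params}))
        while True:
            reply = json.loads(ws.recv())
            if reply.get("id") == msg_id:
                return reply

    cdp("Page.navigate", url="https://example.com")
    # One-off evaluation without enabling the Runtime domain first.
    print(cdp("Runtime.evaluate", expression="navigator.webdriver", returnByValue=True))
    ws.close()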

Then when you move into an environment that is a lot more stripped down (say, from your home machine to Docker), you run into A LOT of issues that you are definitely better off fixing with browser patches. Figuring out what those issues are and how to fix them is a huge feat in itself, and it often requires the ability to reverse engineer Cloudflare, Akamai and other anti-bot vendors just to know which leaks you still have to patch.

It doesn't help that there is no end to the misinformed articles on things like "browser fingerprinting" that you encounter when you first try to solve these issues: articles based on nothing but superstition, articles that basically say "proxies are never good enough" or "captchas are getting out of hand", that get things wrong and will just eat away at your sanity while you're trying to debug.

This is already long enough of a rant, but maybe it offers you some insight. If you have any specific questions, feel free to ask.


Why not create a library that you inject into the Chrome process though?

It seems to me that playing a cat and mouse game with these anti-bot systems is unnecessary. Design a system which mimics a legitimate user to such a degree that it's either indistinguishable from an actual user or would produce an unacceptable level of false positives for the detection system. This is not an even playing field; the bot has all the advantages.

For example:

- Enumerate all the possible ways in which the webpage can glean insight into user input/activity.

- Hook all these functions by injecting code into the browser, at a level above and completely inaccessible to anything the web page can do to detect or interfere with it.

- Create functions that mimic user activities (mouse pathing, aimless mouse wandering, random scrolls, clicks, text selections, etc.)

- Feed the outputs of these functions into the functions that you hooked.

- Rip out whatever information you want from the Chrome data structures in memory. Can probably reuse CDP code here.

After all this, the only challenge that would remain is to perfect the input functions that are supposed to mimic a legitimate user. Depending on how sophisticated these anti-bot systems can/will get, you may also need to cultivate user browsing habit profiles to enter advertising/spying databases as real humans.
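As a rough illustration of the "mouse pathing" item above, assuming nothing about how the points are ultimately delivered (injected hooks, CDP input events, etc.): generate a curved, slightly jittered path with uneven timing rather than a straight line at constant speed.

    import math
    import random

    def human_mouse_path(start, end, steps=60):
        # Quadratic Bezier from start to end with a random control point, so the
        # path bows to one side the way a real hand tends to.
        (x0, y0), (x1, y1) = start, end
        cx = (x0 + x1) / 2 + random.uniform(-100, 100)
        cy = (y0 + y1) / 2 + random.uniform(-100, 100)
        path = []
        for i in range(steps + 1):
            t = i / steps
            ease = (1 - math.cos(t * math.pi)) / 2          # ease-in / ease-out
            x = (1 - ease) ** 2 * x0 + 2 * (1 - ease) * ease * cx + ease ** 2 * x1
            y = (1 - ease) ** 2 * y0 + 2 * (1 - ease) * ease * cy + ease ** 2 * y1
            jitter = 1.5 if 0 < i < steps else 0            # endpoints stay exact
            delay = random.uniform(0.004, 0.012)            # uneven inter-event timing
            path.append((x + random.uniform(-jitter, jitter),
                         y + random.uniform(-jitter, jitter),
                         delay))
        return path

    for x, y, delay in human_mouse_path((100, 100), (640, 400)):
        print(round(x), round(y), round(delay, 3))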


> It seems to me that playing a cat and mouse game with these anti-bot systems is unnecessary. Design a system which mimics a legitimate user to such a degree that it's either indistinguishable from an actual user or would produce an unacceptable level of false positives for the detection system.

This is the most common misconception: the challenges you face with browser automation at scale are not *automation* challenges.

You can use real human input, by having actual humans do the input, and you will still get blocked.

Automation at scale means running dozens to hundreds of browser instances concurrently on the same hardware. Only after you've mitigated the IP-related issues do you start running into the actual challenges, which are completely different from the automation part itself.

You have to research all the little quirks browsers have through the various APIs that they offer and then compare that data to real world data before you can start to actually fix the problems.


There are browsers, such as Brave, which randomize such fingerprints. The web page does not have any insight into your hardware that you cannot mitigate by having the browser fake the responses.

You can also use Linux features such as namespaces & TUNs[1] to properly utilize proxies. Something I noticed is that Chrome under --proxy-server=socks5:// is incapable of using HTTP3 (UDP), for example; perhaps a deliberate omission.

[1] <https://github.com/xjasonlyu/tun2socks>
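A very rough sketch of that setup, assuming a network namespace (here called "proxied") already exists with its default route pointed at a tun device, and glossing over the address/route configuration; the tun2socks flags follow its README, and the proxy address is a placeholder.

    import subprocess

    NETNS = "proxied"  # assumed pre-created namespace with a tun0 default route

    # Terminate the tun device's traffic into an upstream SOCKS5 proxy.
    tun2socks = subprocess.Popen([
        "ip", "netns", "exec", NETNS,
        "tun2socks", "-device", "tun0", "-proxy", "socks5://127.0.0.1:1080",
    ])

    # Launch Chrome inside the namespace: no --proxy-server flag, so QUIC/HTTP3
    # and the rest of the network stack behave like an ordinary machine's.
    chrome = subprocess.Popen([
        "ip", "netns", "exec", NETNS,
        "google-chrome", "--remote-debugging-port=9222",
    ])
    chrome.wait()
    tun2socks.terminate()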


When scaling browser automation, generating random fingerprints for the most common high-entropy data points is counterproductive. It just ends up lowering your trust score and shifts attention to other browser properties with less entropy, making those the primary identifiers.

For example, degrading canvas, WebGL, or WebGPU fingerprints (e.g., by introducing noise like Brave does) might lead anti-bot systems to either ignore them or punish you with captchas. Once they're ignored, other signals, such as screen resolution (just an example), become more important. While this helps people with privacy by letting them blend in with other users, and a single user visiting a website normally will probably not notice much, an influx of many users with degraded fingerprints and similar resolutions becomes easy to detect and might get a captcha or get blocked (e.g. 30-50+ browser sessions generating cookies for a specific captcha concurrently).

You can spoof multiple resolutions and then add some other properties, but it requires consistency across all of them, which can come down to weird browser-specific quirks as well as whatever the anti-bot vendor's data set contains (regardless of how accurate it is). There are only so many plausible values for each low-entropy data point that anti-bot systems will give you a high score for, forcing you to spoof as many data points as possible to maintain a high trust score across many concurrent sessions. Eventually you either scale back, hit a limit for your operation, or deal with captchas by solving them and lose to the competition that doesn't have to do that.

Fingerprinting at scale isn't just about spoofing individual data points - it's about aligning all the points in a realistic way and knowing which of them relate to each other and how, which requires extensive data and research.
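To give a flavour of what "aligning all points" means in practice, a small sketch using CDP's emulation overrides; the values and websocket URL are placeholders rather than a recommended profile, and the point is only that every override should describe the same plausible machine.

    import json
    from websocket import create_connection  # pip install websocket-client

    ws = create_connection("ws://127.0.0.1:9222/devtools/page/<target-id>")
    msg_id = 0

    def cdp(method, **params):
        global msg_id
        msg_id += 1
        ws.send(json.dumps({"id": msg_id, "method": method, "params": params}))
        return json.loads(ws.recv())

    # One coherent story: a Windows desktop UA together with a matching platform
    # and language...
    cdp("Emulation.setUserAgentOverride",
        userAgent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
        acceptLanguage="en-US,en;q=0.9",
        platform="Win32")

    # ...and screen metrics such a machine could plausibly have.
    cdp("Emulation.setDeviceMetricsOverride",
        width=1920, height=1080, deviceScaleFactor=1, mobile=False)
    ws.close()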

On proxies: flagged IPs with residential ASNs often work fine if the overall trust score is high, but degraded fingerprints like Brave's can undermine that advantage, and then the IP becomes a lot more important, though it's always nice to eliminate flagged IPs if you are able to do so.


Even a single script that performs actions too quickly on a website can trigger anti-bot measures, even if the bot isn't detected directly.
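A small sketch of one mitigation for that, with arbitrary timing ranges: add variable "think time" between actions so a run doesn't fire clicks and navigations at machine speed.

    import random
    import time

    def pause(base=0.8, spread=1.5):
        # Base delay plus a long-tailed random component, so gaps vary like a
        # human's rather than ticking at a fixed interval.
        time.sleep(base + random.expovariate(1 / spread))

    for step in ["open page", "scroll", "open product", "add to cart"]:
        print("doing:", step)
        pause()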


I'm not denying that, I'm saying it's not a difficult challenge to solve when you compare it to the others I mentioned.


The biggest issue with going from a home machine to a server is that you may lose your "residential IP address", which is something you'll want to have in order to prevent the automation from being blocked outright; hence the popularity of residential proxies. However, some servers live in residential IP space, which makes them well suited to running web automation. As was partially covered in https://www.youtube.com/watch?v=Mr90iQmNsKM, GitHub Actions appears to live in residential IP space, which makes it a good server choice for web automation.


IP is definitely not the biggest issue in my experience, as proxies are required at scale regardless, unless you get into more theoretical areas like p0f.

The biggest issues are the ones that aren't obvious or easily tested for, like missing a particular font, being on an unusual graphics driver that produces an unrecognized hash for particular fingerprinting methods, or not having certain APIs available without browser patches; and these aspects will differ between anti-bot vendors and the data sets that they have.

The reason they can be hard to test for is that everything is based on a trust score, which is potentially influenced by anything from website load to things tied to your personal session and, for some vendors, optionally even input data.



