I miss good and responsive support. I mostly deal with Google GCP and it's soul-draining. They always take close to 24 hours to reply to any of your comments; even if you reply immediately, you won't hear from them until the next day. They cherry-pick one line to answer, and their replies often contradict the rest of the facts presented in the issue report. They shield product engineers from you, acting as an intermediary, so now there are two 24-hour hops each way. When the product engineer also gives a nonsense answer, support doesn't enter into a discussion with them on your behalf to get something reasonable; they just pass it along.
For the price they charge for support I'd expect more quality.
There is very little incentive for engineers at big tech to spend a lot of time on support. In fact it takes time away from their projects, which are what their performance review is tied to.
Internally big tech would need to prioritise customer satisfaction as some OKR to incentivise engineers - but this doesn’t happen for some reason. As a result you’re at the whim of whether the engineer cares enough to spend time on your case.
Note in my experience AWS has great support - so their internal incentives seem to be aligned with customers.
This is not a difficult problem to solve, if the corporation wants it solved.
We managed it by assigning someone to support on a rotating weekly basis. During that week, project hours missed due to support tasks were acceptable. If you had a critical task due and it was your support week, you could swap support with someone, or we'd reassign the task. But with Software being the "last line of defense" after Field Service and Product Technical Support, we couldn't just ignore it.
There are solutions. You just have to act like you want it solved.
I honestly think this is the best way to do it. I know a lot of software developers don't want to do this and I understand why.
But there's nothing like an engineer seeing the struggles people have for themselves and choosing to fix it so they don't have to field that damned question anymore (I'm saying this tongue-in-cheek).
Plus, you know as well as I do, the more people between your customers and your engineers, the more agendas get injected into the mix.
The company had a voluntary program (for a while) where you got to shadow specific local customers for a day. It was unbelievable. I had been there about 2 hours before I had a long list of potential product ideas just from watching this guy struggle. Not just with our product, but mainly with the interaction between all the different machines and instruments he had to use to get his work done.
I see comments like this on HN frequently and I wonder what is going on with BigTech. I guess I wouldn't make it. Support is fun. You're solving mysteries and (hopefully) fixing them within the constraints of the design.
It's the performance culture at these companies. An engineer gets evaluated on metrics they move the needle on. At very OKR centric companies like Google, these metrics have to tie into high level business goals like revenue growth. I imagine it is pretty difficult to match customer support metrics to these, hence it's not prioritised by engineers or the company.
Support engineers are first or second line support, and typically limit their scope to customer errors/pebkacs. If you find a legitimate bug, it is typically escalated to the SWE team's oncall who maintain the service. For example, I found a legitimate bug in ALBs (new at the time) and was talking directly with the SWEs who built ALBs.
This makes sense if you consider the size of the product engineering team vs the amount of customers out there. For every engineer there's probably hundreds or thousands of customers. If they had to engage immediately with every support case, there would be zero progress on any other job.
The problem actually comes from the fact that big tech is getting increasingly cheap about building good support organizations. Experienced support engineers are fired and replaced with outsourced, low-cost, inexperienced personnel. In most cases, issues can be resolved or worked around with the help of a support engineer who has access to some extra knobs. When those engineers are removed and replaced with people who act like a `cat` pipe forwarding the customer's stdout to product engineers, you get what you describe.
I recently had a conversation with a large tech company that went basically: "We saved a ton of money by replacing all of tier 1 support with AI chatbots, so that the problem can be identified and routed to the correct tier 2 person to triage before it goes to a tier 3 person if it needs to. Next we're replacing all of the tier 2 people, and the only time we have to get an expensive human involved is for the weirdest/most difficult problems." My reply: "Where do tier 3 people come from if they have no experience at tier 1 or 2?" Cue the sound of crickets...
That perfectly described my team. Everyone hated on call. We shared an on-call rotation and were responsible for multiple systems that few people understood. Customers have questions, you just don't know the answers, and nothing is documented. Every week you were on call, there'd be 5-10 support cases still open from the previous week. You have no context and no docs, so you give a BS answer and pass it along to the next customer.
In the case of GCP, they outsource their support services to smaller companies who (as I’ve been told) have no access to any details of the customer’s environment, and may sometimes escalate to a contact point within Google if the issue is not solved after some level of troubleshooting.
The list of companies providing support for GCP can be found in their subprocessors list.
If the server has a strict iptables policy for incoming packets, you would still need to add an iptables rule to allow the second port.
So if you need to touch iptables anyway, why not just redirect there without editing the sshd config? The fewer modifications, the better the chance you won't forget to revert them.
An advantage of the "iptables magic" is that the service never has to run as root: I've done this before with great success, to have a web server accessible on standard ports even though it was running as an unprivileged user.
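For anyone curious, a minimal sketch of what that iptables magic can look like (the port numbers and the 8080 backend are just examples):

```
# make sshd reachable on 443 too, without touching sshd_config
# (after the redirect, the filter INPUT chain sees dport 22, so the
#  existing ssh allow rule is what applies)
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 22

# same trick for an unprivileged web server bound to a high port
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
```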
There's a capabilities trick you can use to grant the permission to a process without changing users. You can also have it bind to the port as root and then drop privileges afterwards.
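Sketch of the capability approach (the binary path is just a placeholder):

```
# grant only the "bind privileged ports" capability to one binary,
# instead of running the whole thing as root
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/myserver
```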
The reason privileged ports require root is so that they cannot be easily intercepted by userland processes.
The entire reason for the root requirement is this; it's expected that after your port assignment you drop your privileges to a lesser user. But requiring root to bind is a feature, not a bug.
If you do not have this then intentionally crashing the process for a user and rebinding the port as a standard user (even `nobody`) is possible.
This means if you break into someone's mail server and run as the mail user, you would be able to bind ssh ports (if they are not privileged ports <1024).
Of course your mail server should be running as the `mail` user or some equivalent, because only the binding of the ports should be done as root at startup, and then it should drop down to the mail user.
I remember doing some relatively complex stuff with SSH config 15 or 20 years ago: IP filtering, different users having different chroots, IP forwarding rules based on which users were connecting, and rules around which SSH clients/protocols were allowed. Part of that was also defining custom ports. All of it was just defined in sshd_config.
None of this was new stuff back then. It just wasn't well blogged about (in fact it was so poorly written about that my very first blog post was on exactly this topic; the blog is long gone now though). However, if anyone took the time to read the man pages, they'd see all the functionality is already baked into openssh.
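To give an idea, a rough sshd_config sketch of that sort of setup (the group name, paths, and address range are made up for illustration; the real config was obviously more involved):

```
Port 22
Port 2222

# sftp-only users get locked into a per-user chroot
Match Group sftponly
    ChrootDirectory /srv/chroot/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no

# different rules depending on where the connection comes from
Match Address 203.0.113.0/24
    PermitRootLogin no
    AllowTcpForwarding yes
```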
Honestly I also wonder now. I've read through the OpenSSH release notes, which are one single HTML page covering all releases since 2000, but unfortunately I couldn't quickly spot when they would have introduced this change.
Unlikely. While SELinux is more than 20 years old, it was only merged into the mainline kernel a few years after its initial release, and it would have taken a while longer for that kernel to trickle down to distros, and then for sysadmins to install it.
I've exposed SSH over tor in the past, so I can at least get into the box to set up a workaround like this if needed (usually when the DDNS/DMZ setup is broken).
Never had many login attempts that weren't me through it, but had fail2ban installed just in case.
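For reference, the tor side of that is only a couple of lines; a sketch, assuming Debian-ish default paths:

```
# /etc/tor/torrc on the server: publish the local sshd as a hidden service
HiddenServiceDir /var/lib/tor/ssh/
HiddenServicePort 22 127.0.0.1:22

# on the client (the onion address comes from the "hostname" file
# inside the HiddenServiceDir), assuming torsocks is installed:
torsocks ssh user@youronionaddresshere.onion
```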
This lets you watch it run explicitly and kill it when you're done.
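e.g. something like this, run in a foreground terminal (ports and hostname are placeholders):

```
# forward local port 2222 to sshd until you Ctrl-C it
socat TCP-LISTEN:2222,reuseaddr,fork TCP:localhost:22
```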
Socat also lets you route networks through old serial ports, log all data going over a connection to a file, and even join completely different protocols.
One fun past project based on socat: a serial port -> socat to TCP out -> socat on another computer listening -> a serial port out. Basically this created a serial port that worked over a satellite link for a customer doing some remote monitoring, so they could set an alarm if something failed (a lot of equipment only has serial connectivity for status).
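Roughly, the two ends looked something like this (the device paths, baud rate, and hostname are assumptions, not the actual setup):

```
# equipment side: bridge the local serial port out over TCP
socat /dev/ttyS0,raw,echo=0,b9600 TCP:monitoring-host:5000

# monitoring side: listen for that stream and hand it to a local serial port
socat TCP-LISTEN:5000,reuseaddr /dev/ttyS0,raw,echo=0,b9600
```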
Socat would make the connections to ssh show up from the machine running socat, rather than the origin, which might have auditing/logging implications.
Yeah, the quick and dirty socat way can be an advantage or a disadvantage. I like it for the times I just need to quickly get something through as a one-off, exactly because it only runs when I run it. I feel like hacking some port mapping up during a disaster is more of a socat thing than an iptables thing, but hey, if it works it works.
At the same time, free public WiFi is just that: a "we do what we can and try to keep the infra safe at low cost" kind of service.
There is a lot of tooling to filter out bad behavior over HTTP. When it comes to other protocols, not so much. It's much easier to block other ports than to end up with your IP range on a block list.
What I'd like to see is public WiFi set up with something akin to "HTTP/HTTPS - wide open" and "all other ports, you can connect to 1-5 machines an hour" or something.
That blocks useful access for worms, Trojans, etc., but still lets you get out once.
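On a Linux-based access point you could sketch that with the hashlimit match; strictly this caps new non-web connections per client per hour rather than counting distinct machines, but it's the same spirit (interface name and numbers are assumptions):

```
# already-established flows keep working
iptables -A FORWARD -i wlan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# web is wide open
iptables -A FORWARD -i wlan0 -p tcp -m multiport --dports 80,443 -j ACCEPT
# everything else: a handful of new connections per client per hour
iptables -A FORWARD -i wlan0 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-upto 5/hour --hashlimit-burst 5 \
  --hashlimit-mode srcip --hashlimit-name guest-misc -j ACCEPT
iptables -A FORWARD -i wlan0 -j DROP
```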
Trojans only need to connect to one server (C&C) and they usually use HTTP(S). Worms are almost nonexistent nowadays. Spambots are a concern (some of which could be considered a worm if you squint hard enough), but they only need to connect to two servers (the configured SMTP server and C&C).
It uses high ports that shouldn't be blocked. Usually it is already running when I open the laptop, which saves a lot of time. As an added benefit, it sometimes even works when the actual wifi connection is not yet authorized at the portal, because that often only blocks TCP and not UDP.
I found this recently too: it's a bit tricky to set up, but it works very well for my use case and I did not notice any performance issues.
Basically your services listen on localhost:port, sslh listens on hostname:port, inspects the start of each incoming connection, and forwards it (transparently) to the right localhost:port.
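Something along these lines, if memory serves (addresses are examples, and depending on the sslh version the HTTPS target flag is --tls or --ssl):

```
# sshd on localhost:22, web server on localhost:4443,
# sslh demultiplexing whatever shows up on the public 443
sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:4443
```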
If you put everything on port 443 it's very unlikely they will ever be blocked.
Good reason to have a backup means of connecting, like ssh over https. (Or, at the time of the article, since it's possible even https was blocked, ssh over dns.)
Even today, while traveling through the US, there are lots of wifi access points that won't allow you to ssh. Sometimes I just fire up my VPN and then I can ssh again.
Social engineering has limits and each individual has unique vulnerabilities. It's not possible to call in and speak a single sentence compelling any agent who hears it to immediately burn the office building down.
Some human vulnerabilities are surprisingly common. A lot of scammers follow scripts and formula. Of course, to coax arson would be difficult. But life-devastating incidents like emptying out the entire bank account, leaking secrets, causing self-harm, etc. are not unheard of.
If you can sneak a <blink> tag into the ticket system, you likely can sneak in a <script> or <iframe> tag as well... I'm sure input sanitization was already being preached back then, but it was ignored by many web developers...