Someone needs to design a programming language and an IDE (and possibly a new OS too) with great debugging as the primary goal. Debugger and IDE support is always just thrown on later in every new language these days.
"Omniscient debugging" as seen in https://pernos.co/ is the holy grail. Time-travel debugging would also be great.
People are far too obsessed with static type checking. A lot of this time would be better invested in live debugging tools.
When I'm editing my code I want to see exactly what _value_ each variable contains, the type really doesn't matter so much. Wallaby.js/Console.ninja is a great example of this.
Good debugging, especially deterministic record/replay is usually complicated by the OS. I often wonder what an OS would look like if designed with debugging as a top priority.
> People are far too obsessed with static type checking. A lot of this time would be better invested in live debugging tools.
I mean.. the two are tightly related. The type represents the outer bounds of values possible. Types will give you as much utility as you put into them (or as the language allows).
I agree in general, I just think you massively undersell types haha. It's not either-or, it's AND. Always AND.
I agree. Even with excellent debugging, I think types will allow me to need to debug less. This seems ideal, because debugging means things are breaking.
Types also mean that I can describe my program in ways others can understand and trust. I don’t want to pass off potentially broken programs to other people and say don’t worry, the debugging experience is great.
Types have always been a hard sell for me. I honestly never have issues with types being wrong when I write code, and I understand that this is missing the point. The point is that the types define a very concrete interface and we know up front when the interfaces break, and can statically check that the interface is correct everywhere.
I feel that mathematical types are so far distant from the type system of something like C or C++ that sharing the term "type" is itself somewhat of a footgun.
I've worked with too many people who insist there's nothing wrong with the way they are coding, while I've had conversations with 4 different people who have gotten tired of having to clean up after them.
I can't take anyone's word anymore that a tool is useless because they don't need it. We as humans need tons of things we pretend that we don't. It's positively pathological.
Yeah I don’t think that my point was that I don’t need types, but more closer to yours that I’ll never really have an objective way to know if I need types.
> Types are just human and machine readable documentation
This is only possibly true in the context of languages with poor static typing ecosystems, and even then I don't think I fully agree.
When a type-checker admits a program, it is because it has proved the program to be free from certain classes of errors. The more expressive your type system, the more elaborate and precise your claims about your program's behavior can be. This is fundamentally distinct from providing documentation that tells your users "please don't input values outside this range".
Early type systems (like C's) are woefully incapable of any serious specification. Modern type systems are significantly more expressive, and are therefore significantly more capable than you suggest. Types in the context of these expressive type systems are logical propositions, and the programs we write are their proofs.
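Even a modest type system lets you state propositions of this kind. A minimal Python sketch (all names invented for illustration; the static check assumes an external checker such as mypy):

```python
from typing import NewType

# Encode a claim ("this int is a user id, not a raw count") in the type
# system rather than in a docstring.
UserId = NewType("UserId", int)
OrderCount = NewType("OrderCount", int)

def fetch_orders(uid: UserId) -> OrderCount:
    # Stub lookup table, purely for illustration.
    orders = {UserId(1): OrderCount(3)}
    return orders.get(uid, OrderCount(0))

n = fetch_orders(UserId(1))
# fetch_orders(OrderCount(3)) would run fine at runtime, but a type
# checker rejects it: the "proof" lives in the annotations.
```

At runtime `NewType` is a no-op, so the guarantee exists only at check time, which is exactly the "proposition vs. documentation" distinction above.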
If you're trying to simulate a combinatorial explosion of possible states, reining in the possible values would certainly help with maintaining that illusion.
We pivoted to record and replay. Developers can put assertions + mocking around the replay. We are working on a feature that shows what _value_ each variable contains. Check out unlogged.io
We built a time-travel debugger for Java, since we worked in the financial domain where Java was predominant. We couldn't debug on remote machines for lack of access. We had to log a lot and it felt like guesswork. Having a debugger with a back button made a lot of sense. We logged everything by default and reverse-mapped it to the code, so that we could reverse-F8.
The original Dart language designers gave Dart Smalltalk semantics with JavaScript syntax. But Dart’s users rebelled and forced Dart to adopt a more compile-time-type-checked type system.
Some of the original Dart designers have developed a new language Toit that is Smalltalk-like: https://toit.io/company/about
Yup. The OS contains the debugger process contains a copy of the debugger contains a copy of the runtime contains a copy of the program state contains a copy of the linked system libraries and such and, somewhere, in some tiny section of the executable behemoth, the code that I wrote to print hello world.
Always wondered why we usually stop at adding first-class features at the code level. Or go "Oooh" if a language lets you go one step up the chain to e.g. metaprogramming methods.
The whole chain is where we are working; the whole chain should be the first-class citizen of the language/tool.
Pretty neat. But what I need the most right now is a way to visualize information. I work on computational fluid dynamics and there I have:
- super intricate algorithms with lots of variables running around
- lots of huge numpy arrays storing the bulk of the data
I find the current debuggers lacking here:
- showing numpy arrays in colours, easily changing colour scales, etc. is not easy (you can do your own visualization, but then it's code); nor is exploring specific parts of the array in detail, linking them to other arrays, etc.
- build watch panels where you can mix analyses and values extracted from code. I can't count the number of times I've frozen the execution, exported data to R, done the analysis there, then fixed the code. It'd be nice to have that integrated in the IDE.
- have watches that record values as execution runs (I guess that's one of the things omniscient debugging does)
And while I'm at it:
- why can't I cut and paste formulas, pictures, etc. into my code? (no Jupyter, you're OK for a nice paper-like presentation but not for code with hundreds of thousands of lines)
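While waiting for debuggers to catch up on the numpy side, even a tiny helper goes a long way. A hypothetical sketch (function name and character ramp are invented) that renders a 2-D array as an ASCII heatmap for quick in-debugger inspection:

```python
import numpy as np

def ascii_heatmap(arr, chars=" .:-=+*#%@"):
    """Render a 2-D array as an ASCII heatmap for quick eyeballing."""
    a = np.asarray(arr, dtype=float)
    lo, hi = a.min(), a.max()
    # Normalize to [0, 1]; a constant array maps everything to the low end.
    scaled = np.zeros_like(a) if hi == lo else (a - lo) / (hi - lo)
    idx = (scaled * (len(chars) - 1)).astype(int)
    return "\n".join("".join(chars[v] for v in row) for row in idx)
```

Called from a breakpoint, something like `print(ascii_heatmap(field[:64, :64]))` gives a rough picture of the data without leaving the debugger or opening a plot window.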
What if Jupyter was ok for code with hundreds of thousands of lines? I suspect it might be.. I think nbdev ameliorates my major gripes with jupyter: it lets you easily maintain a bidirectional jupyter <-> (library as standard python files) mapping, while also giving you the excellent presentational flexibility of Jupyter. Plus a good looking docs site, for free. Since the code is exported as scripts, you can just `pip install -e .` and get all your IDE integration as normal.
Could you have special markdown cells that mix python code blocks and other inline media, which get converted to stripped python cells? You might be able to write an IPython 'magic' for something like that.
Charles Simonyi's Intentional Software was doing that back in the days of Windows NT https://www.youtube.com/watch?v=tSnnfUj1XCQ But, they stayed insular and enterprise right up until Microsoft bought them back.
We mortals barely got to see or hear anything about it. I heard one podcast interview of a developer there who said they basically represent the code as an s-expression tree with a bunch of metadata on each node.
Yeah sure, but I'd like to sprinkle lots of little snippets of code in many different places, ad hoc. This happens while I'm debugging issues, 'cos I have to integrate information from various places. Note also that it's not like chasing a clearly visible bug: sometimes I have to wander through literally hundreds of data arrays just to find what's going wrong. Since I do that close to the code (modifying bits here and there to see how it influences things), it'd be nice to have a "data analysis debug context" very close to the code.
I'm sure there's a space here where one can provide tools. For example, a drop-in "record_this(object)" method integrated with the debugger so that I can look at the results, without the need to build a logging structure to do it myself.
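A minimal sketch of what such a drop-in could look like (the `record_this` name comes from the comment above; everything else is invented for illustration):

```python
import copy

_RECORDINGS = {}

def record_this(obj, label="default"):
    """Snapshot obj under a label; pass it through so calls can be inlined."""
    _RECORDINGS.setdefault(label, []).append(copy.deepcopy(obj))
    return obj

def recorded(label="default"):
    """Return all snapshots taken under a label, oldest first."""
    return _RECORDINGS.get(label, [])
```

Sprinkle `record_this(state, "solver")` inside the loop, then inspect `recorded("solver")` from the debugger. A real integration would add timestamps and source locations, and let the debugger render the history directly.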
I had an idea to trace all usages of values. So you can see the entire history of how a value came to be. All the places it went. All the transformations too.
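A toy version of that idea (class name and API invented) routes transformations through a wrapper that keeps its own history:

```python
class Traced:
    """Wrap a value and record every transformation applied to it."""

    def __init__(self, value, note="origin"):
        self.value = value
        self.history = [(note, value)]

    def apply(self, fn, note=""):
        self.value = fn(self.value)
        self.history.append((note or fn.__name__, self.value))
        return self  # chainable, so pipelines read naturally
```

After `Traced(3).apply(lambda v: v * 2, "double").apply(lambda v: v + 1, "inc")`, the `history` attribute shows the full lineage of the value. A real omniscient debugger recovers this from the execution trace instead, with no code changes required.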
Also, I think code should be written so that it can be easily visualized. Nothing else! Code is for humans to read, not machines - that's what compilers are for.
Code instrumentation is usually for code coverage...but for debugging it can be great too.
Indeed, on top of stacktraces it would be great to have 'value traces' (with functional programming I suppose those would be the same). Especially in testing, what's important are the inputs with the system under test.
True. Thankfully RR, which Pernosco is built upon, is open source.
You can do a very simple form of value tracing with just rr by setting a watchpoint on the memory where the value is stored, and then using the "reverse-continue" command to run the program backwards(!). The watchpoint will trigger when the memory changes, which (running in reverse) happens when it is "overwritten" by the previous value. This is exactly the point where the value you are tracing was written into memory.
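A session sketch of those steps (the program and the `buf` variable are illustrative; `watch -l` and `reverse-continue` are real gdb/rr commands):

```
$ rr record ./myprog          # capture a deterministic recording
$ rr replay                   # opens a gdb session against the recording
(rr) continue                 # run forward to the crash or point of interest
(rr) watch -l buf[42]         # hardware watchpoint on the value's storage
(rr) reverse-continue         # run backwards until that memory changes
```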
This is not as precise as Pernosco’s value traces, but it is still immensely useful. It makes debugging buffer overruns, use after free, stack or heap corruption, etc, etc extremely easy. Since traditional methods of detecting these types of bugs are a lot more work, you can earn an astounding rate as a contractor fixing people's C++ code using RR or Pernosco.
While I also wish Pernosco were open source, there is at least reason to be glad that it is a managed service rather than something we run on our laptops. I talked to Roc about self-hosting (which is available) once and he mentioned that a trace of an application like Firefox gets turned into a database taking up dozens or hundreds of gigabytes of disk space. The only reason Pernosco can turn them around in just a minute or two is that each trace uploaded is given its own 36-core instance for the conversion. Even if it were open source, we wouldn't exactly be running it on our laptops at the beach.
> People are far too obsessed with static type checking.
I thought this too. I'm a huge Ruby fan, and also Clojure and Elixir. In Clojure you have Spec (https://clojure.org/guides/spec) which is pretty powerful if you choose to use it. And in Elixir, you have pattern matching that can take you quite a long distance without need for specifying explicit types. ...
But lately I've been learning Swift, and now I'm in favor of explicit static typing. (You don't have to be explicit always with Swift, but it's not much trouble to be so... and it's necessary sometimes.) And then switching back to Ruby, I find myself wishing for that visible type clarity on function params. Am I dealing with an object and needing to reference properties? Is this a hash and I can access key/value pairs? A developer using some library may have to dig in and read library source to really know what's expected when types are not required or listed in the function signature.
What I would like to see is Elixir style pattern matching with static types. Then you can "duck type" in terms of the shape of the input as long as it conforms to some subset of data+type.
I've been working on an OS for a few years now that addresses this pretty much directly. Not ready to announce anything but there are certainly a group of people doing OSdev along these lines, myself included.
> Someone needs to design a programming language and an IDE (and possibly a new OS too) with great debugging as the primary goal. Debugger and IDE support is always just thrown on later in every new language these days.
Already designed: the AnimationCPU platform, with a new ACPU OS, the ACPUL programming language, and a real-time time-travel debugger. But there is not much marketing, so you can only watch some demos:
>> People are far too obsessed with static type checking.
I have to fully agree with this. At least here on HN, I see comments all the time saying "the language is good, but it does not have static typing," as if that were THE thing needed.
Of course it helps, but I think it is greatly exaggerated.
I think many people code the way some of my colleagues did Physics at university: they looked at the data available in the problem description and searched for formulas to plug the data into, looking only at the units, so that the output unit was the correct one for the answer required. In a similar way I see people smashing things at APIs without reading the documentation, using bad naming, having tens of variables in 10-line functions, all because "no problem, if something is wrong, it won't compile!"
I have written tens of thousands of lines of code in dynamically typed languages, and I can count and remember the few times I had a problem related to types. Of those, only twice did I have a somewhat difficult time debugging (a couple of hours).
> People are far too obsessed with static type checking. A lot of this time would be better invested in live debugging tools.
I disagree. Time spent debugging is time wasted. It's a non-repeatable sunk cost. Don't get me wrong, I love debugging - it's like solving a mystery in real-time - but the competitor to existing debuggers is not a better debugger, it's to obviate the need for one.
Friendly suggestion to peruse Jetbrain's third-party data-sharing agreement while considering what information of a user, even in theory, it might forbid them to share with anyone they wish for any reason they please.
Nothing listed there is anything particularly problematic so far as I can see. Can you be more specific?
If you're saying that it's sufficiently vague as to not actually limit them in any way, then I'm wondering what a better privacy policy would look like. One that gives them proper leeway for the legitimate day-to-day workings of a company, but which manages to not be just as vague.
One thing worth remembering here is that the alternatives you mention are exclusively IDEs, whereas the JetBrains privacy policy covers the data sharing they do for their whole ecosystem of products and services, not just their IDEs.
Unfortunately JetBrains, like many, are unwilling to pass up on exploiting the good margins on data extraction - even at the expense of their products and userbase.
Interesting that you didn't even mention their open-source Community Edition: https://github.com/JetBrains/intellij-community which powers plenty of other open-source IDEs, yet you push VSCodium.
The ferocity with which that ask comes across has me thinking there's only a very limited set of right answers! Promise you won't sue me or anything if my answer turns out to be unsatisfactory? ;)
The only comment I posted on this entire post was the one[0] where I said: "Which also seems to be covered by the same privacy policy". I purposefully used the word "seems" because I'm not entirely sure whether that's true, and also included a link so people can check it out for themselves.
If you honestly consider this grifting and intentional misdirection, then I'd be interested to hear what you believe I'm trying to achieve here.
Discussion and subthreads off the main thread flow organically, and it's OK for them to not stay strictly on the main topic of the post (they're still on the topic of debugging, they haven't suddenly started talking about cars or stamp collecting).
Sometimes the best insights and comments are found in a side-thread.
Besides, this is one of several comments you've left with the same putdowns in this thread. They have added more irrelevant noise to the discussion than their parent comments ever did.
I intend for this to come across as somewhat philosophical, and I hope people can see this as beneficial. We need more grounding in philosophy here in the technology world.
> Discussion and subthreads off the main thread flow organically, and it's OK for them to not stay strictly on the main topic of the post
Sure, it is "OK" per the HN Guidelines.
Also, yes, these things happen. Lots of things happen, and they have a range of desirability. (See also the naturalistic fallacy, is-ought fallacy, and appeal to nature.)
Not all comments and subthreads are equally useful to us. It is ok for me or anyone to shape the conversation in ways that they think are beneficial. We don't have to agree on the fundamental principles in play here; we have different approaches and probably values.
> Besides, this is one of several comments you've left with the same putdowns in this thread.
It is completely inaccurate to call my comments "putdowns" which are defined as "snub, disparaging remark, insult, slight, affront, rebuff, sneer, disparagement, humiliation, slap in the face, barb, jibe...".
These comments are constructive criticisms. (You may not like that they involve an explicitly stated redirection. Not everyone may be as up-front as I am about what I'm doing, but redirection is common, useful, and essential.) People can take it, leave it, and/or comment about it. This is how discussion can work, and it is also how discussion should work in many cases.
> They have added more irrelevant to the discussion noise than their parent comments ever did.
It seems inconsistent to say "Discussion and subthreads off the main thread flow organically" while criticizing my comment using a comment of your own, don't you think?
If it is ok for conversations to meander like you suggest, then it is ok to have conversations about what we're talking about, including suggestions about where to take them.
Take Away: You expressed your point of view; this is fine. I happen to disagree, and I've given my reasoning. It has been civil, and it is ok.
On Topic Drift: Let me ask you this: would you rather have a thread about debugging wander off into, e.g. criticisms of JetBrains the company? I personally would not. You might. This is fine.
Governance: There are questions of "what is best" that are not easily agreed upon. The Hacker News (HN) Guidelines are a good start, but they are incomplete. We also don't get direct influence over what they are. They are "handed down" to us. We're participants here without direct influence on policy or governance. We can vote up and down, flag, or persuade with discussion. We don't have other ways to guide policy or govern here on HN. On other platforms, such as Stack Exchange sites or Wikipedia, there are tracks/paths to increased moderation abilities. There are no "meta" channels either.
Infighting By Design: Due to the lack of proper governance, these kinds of discussions happen a lot, and they can be divisive. We should not "blame" ourselves when they happen. If anything, the unifying 'problem' is Hacker News itself. This is a design issue. Human nature is predictable. Until HN offers more ways to handle it directly, these kinds of conversations will keep happening.
My goal here?: I hope I've elevated the discussion beyond what could have been a rather uninteresting clash over simplistic ideas over "what is on-topic?" or "what are the rules?" or even "what is right?".
Personal Motivations: Both open discussions and ethics matter tremendously. Too often people don't address the elephant in the room, which often involves "how do we want these conversations to work"? while also admitting "we don't really have the levers of control we might want." Throughout a 20+ year career, I rarely see thoughtful, substantive conversations about these topics. Too many technology people skip right past them.
[This is an update / rewrite of the sibling comment; I lost the ability to edit the other comment.]
I intend for this to come across as somewhat philosophical, and I hope people can expand their points of view. We need more grounding in philosophy here in the technology world -- ways to combat a severely limited set of beliefs and habits that have grown out of an insular culture.
> Discussion and subthreads off the main thread flow organically, and it's OK for them to not stay strictly on the main topic of the post
Sure, it is "OK" per the HN Guidelines.
Also, yes, these things happen. Lots of things happen, and they have a range of desirability. (See also the naturalistic fallacy, is-ought fallacy, and appeal to nature.)
Not all comments and subthreads are equally useful to us. It is ok for me or anyone to shape the conversation in ways that they think are beneficial. We don't have to agree on the fundamental principles in play here; we have different approaches and probably values.
> Besides, this is one of several comments you've left with the same putdowns in this thread.
It is completely inaccurate to call my comments "putdowns" which are defined as "snub, disparaging remark, insult, slight, affront, rebuff, sneer, disparagement, humiliation, slap in the face, barb, jibe...".
These comments are constructive criticisms. (You may not like that they involve an explicitly stated redirection. Not everyone may be as up-front as I am about what I'm doing, but redirection is common, useful, and essential.) People can take it, leave it, and/or comment about it. This is how discussion can work, and it is also how discussion should work in many cases.
> They have added more irrelevant to the discussion noise than their parent comments ever did.
My comments serve the purpose of refocusing conversation towards the primary topic. Calling that irrelevant seems silly to me. If you call my comment irrelevant, then logic requires a similar labeling for your comment. [3]
It would be inconsistent to say "Discussion and subthreads off the main thread flow organically" while also saying my comment doesn't belong. (You didn't quite say that, but it is very much implied by your tone.)
If it is ok for conversations to meander like you suggest, then it is ok to have conversations about what we're talking about, including suggestions about where to take them.
You expressed your point of view. I happen to disagree, and I've given my reasoning. It has been civil. Good job, us.
On Topic Drift: Let me ask you this: would you rather have a thread about debugging wander off into, e.g. criticisms of JetBrains the company? I personally would not. You might.
Governance: There are questions of "what is best" that are not easily agreed upon. The Hacker News (HN) Guidelines are a good start, but they are incomplete. We also don't get direct influence over what they are. They are "handed down" to us. We're participants here without direct influence on policy or governance. We can vote up and down, flag, or persuade with discussion. We don't have other ways to guide policy or govern here on HN. On other platforms, such as Stack Exchange sites or Wikipedia, there are tracks/paths to increased moderation abilities. There are no "meta" channels either.
Infighting By Design: Due to the lack of proper governance, these kinds of discussions happen a lot, and they can be divisive. We should not "blame" ourselves when they happen. If anything, the unifying 'problem' is Hacker News itself. This is a design issue. Human nature is predictable. Until HN offers more ways to handle it directly, these kinds of conversations will keep happening.
My goal here?: I hope I've elevated the discussion beyond what could have been a rather uninteresting clash over simplistic ideas over "what is on-topic?" or "what are the rules?" or even "what is right?".
Personal Motivations: Both open discussions and ethics matter tremendously. Too often people don't address the elephant in the room, which often involves "how do we want these conversations to work"? while also admitting "we don't really have the levers of control we might want." Throughout a 20+ year career, I rarely see thoughtful, substantive conversations about these topics. Too many technology people skip right past them.
>It is completely inaccurate to call my comments "putdowns" which are defined as "snub, disparaging remark, insult, slight, affront, rebuff, sneer, disparagement, humiliation, slap in the face, barb, jibe...".
You mean like the comments calling other people out for not commenting to your liking, and declaring how you'll "help move [their comment] down the page" in order to curate "the most useful discussion"?
Or the part where you describe another person's perspective as '"pro-privacy" grift', and close by literally telling them to "piss off"?
I'll be very open: I find it unfortunate when I take the time to write out my thinking at length without seeing much result. Maybe it had some effect or maybe not. Either way, so far, you seem stuck on the topic of "was this an insult?". (It seems like a tiny train wreck to me.)
I didn't intend it to be an insult (see other comments, with detailed explanations). You could take me at my word. That would enable follow-up discussion of more valuable things.
I'm not saying I get to nail down exactly what "valuable" means. By valuable, I just mean things that benefit all people involved over a longer term. I intentionally left the details vague.
I'm not asking you to agree with everything I say. I don't want that at all. But I am asking you to go broader and discuss something more. I think we can build bridges. Questioning motives (in the pejorative sense, meaning 'assuming the worst') goes in the opposite direction.
A tiny train wreck may seem unimportant. True. It is nothing to worry about when it happens once. But when you look across human communication, this pattern happens millions of times a day over textual communication channels. It adds up. I want to do something small to help fix this. It is a "bug" in the software people have in their heads. It is a really big deal; it underlies so many disagreements and misunderstandings. If uncorrected, friends and family members sever relationships, often largely due to this underlying bug. (i.e. "She intended to X, and I can't stand for that.")
I believe we need to socialize the idea that assuming intent is an active contributor to damaging our social connections, and therefore the world.
Think I'm exaggerating? Tell me. What small changes can we realistically make to many people's worldview that would have as big of an effect? This is one of the biggest ones, and it is a relatively easy sell.
>I didn't write that or anything like it. Where did you find this quote?
Apparently you didn't. Somebody named Xeamek did, but your comments were shown next to each other, both with the body hidden and the title only shown and marked "[flagged][dead]", so I accidentally opened his when trying to see what you wrote.
Thank you for checking and writing back. Reputations matter.
> Apparently you didn't. Somebody named Xeamek did, but your comments were shown next to each other, both with the body hidden and the title only shown and marked "[flagged][dead]", so I accidentally opened his when trying to see what you wrote.
I hope you realize that "apparently" suggests uncertainty. I am completely certain I wrote no such thing. I think you are too -- why not say it?
Overall, this wasn't the apology I was hoping for.
> This thread of discussion isn't central to the idea of predictive debugging. In the spirit of my tiny influence on curating the most useful discussion, I'm going to help move it down the page. I'm not commenting one way or the other on the particular claims. (P.S. Having only one ranking mechanism inevitably leads to the conflation of user intentions. There are better ways out there; for example, I recommend the side-walking crustacean forum.)
I can't control your interpretation, but I can assure you I didn't intend to insult anyone. Not all criticisms are putdowns.
Think of my comment this way: it was an explanation of my downvote, which gives the commenter some feedback. Most downvotes don't give feedback. Per the golden rule, I prefer constructive criticism, so I gave it.
Your interpretation (claiming that my comment was a putdown) was uncharitable. Your choice when quoting me was selective and did not give the whole context.
No one acts in isolation. We are part of the patterns of many cultures. I regret that many people, including myself, have a tendency to rapidly escalate a situation. [1] They often jump from a valid feeling of discomfort to an accusation of intended harm. This is unwise, unfair, and destructive. We need to be able to criticize ideas without someone taking personal affront.
You don't seem to acknowledge that I am aiming for a higher principle: useful, on-topic discussion. You don't have to agree with the principle to acknowledge that is the motivator of my comment.
I'm criticizing your ideas. I'm hopeful if we met in person we could have an amicable conversation. If you like, we can do a video call. I'm serious. These kinds of misunderstandings are small but damaging to the fabric of what we have here.
[1] I'm rather liberal when it comes to policy solutions. I say this for people that are swayed by tribalism: in many ways, I'm metaphorically 'on the same team' as people that speak up for the less fortunate. Standing up for civil rights and equality under the law is laudable. But this is quite different from uncharitably assuming bad intentions and "calling out" people. The latter undermines the core principles of civil discussion and probabilistic reasoning. In my view, civility and evidence work together quite well. Here's how: once you recognize the limits of your ability to divine someone else's intentions, particularly with only written text, it becomes illogical to accuse them of malice.
Reading the title, my mind conjured what I consider to be the utopia of debugging: a probabilistic assessment of likely failure points across the entire stack, backed by reasoning over the relationships between the components. From there, generate a prioritized actionable debugging task list. An example might be "test input X on function Y so we can rule out Z".
(Aside: The "so we can" part is the key to focusing the effort, otherwise debugging can turn into a technical-debt paydown-party. There is a time and place for that, but maybe not while your service grinds to a halt for unknown reasons.)
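One existing ingredient of that utopia is spectrum-based fault localization: rank code locations by how strongly their execution correlates with failing tests. A sketch using the classic Tarantula suspiciousness score (all names invented for illustration):

```python
def suspiciousness(exec_failed, exec_passed, total_failed, total_passed):
    """Tarantula score: how suspicious a code location is, in [0, 1]."""
    if total_failed == 0:
        return 0.0
    fail_rate = exec_failed / total_failed
    pass_rate = exec_passed / total_passed if total_passed else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

def rank_lines(coverage, outcomes):
    """coverage: {line: set(test_ids)}; outcomes: {test_id: 'pass'|'fail'}.

    Returns (line, score) pairs, most suspicious first -- a crude
    prioritized debugging task list.
    """
    failed = {t for t, o in outcomes.items() if o == "fail"}
    passed = set(outcomes) - failed
    scores = {
        line: suspiciousness(
            len(tests & failed), len(tests & passed), len(failed), len(passed)
        )
        for line, tests in coverage.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A line executed only by failing tests scores 1.0 and goes to the top of the list; the "so we can rule out Z" part would come from reasoning over the ranked suspects.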
Therefore, you can imagine why reading the article was extremely disappointing. No offense, JetBrains; this is still probably pretty good. Certainly a step in the right direction. And it is predictive debugging on a local scale.
Context: I guess you could say I'm a harsh critic of the comically inane ways that debugging often happens. Often debugging happens at the worst time: when a person's cognitive capabilities are drained and stress is maximum. This is not the set of conditions we want. Debugging is quite different than designing. Mental flexibility and triage is key.
Activating type-checking mode in VSCode was a game changer for me when running Python; it helped catch loads of edge cases correctly in some production code.
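A typical example of the kind of edge case a checker surfaces (names invented; assumes a checker such as Pyright, which powers VSCode's Python type checking):

```python
from typing import Optional

def find_user(name: str, users: dict[str, int]) -> Optional[int]:
    return users.get(name)  # returns None when the user is missing

uid = find_user("alice", {"alice": 1})
# Without the None check below, a checker flags `uid + 1`:
# Optional[int] is not int. At runtime that's exactly the
# missing-user path that blows up in production.
if uid is not None:
    print(uid + 1)
```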
If you think turning on type checking was a game changer, I can't wait to see the look on your face when using the more intelligent editor PyCharm; the kinds of things it catches are asymptotically magic.
Has JetBrains ever discussed putting out some sort of generic LSP oriented product(s)?
I own a couple of JetBrains products for very select things, but as much as I enjoy the company I will never use them for my actual code... because I'm a 20-year addict of terminal-based editors (currently on Helix).
I'd love to see the considerable effort they put towards tooling made available to terminal-oriented audiences. Yeah, probably a small audience... but I often see this stuff and think "What if this was an LSP? I'd purchase it in a heartbeat!"
Especially since multi-LSP editing is apparently viable. E.g. Helix now supports multiple LSPs for a single context. Though I'm not clear on the specifics yet, as I've not tried it.
I really want to give them more money. I just don't want to use their UIs for my code.
I don't know if the sort of deep refactoring and other integrations they offer in their IDEs would be possible in the LSP protocol as it exists today. I believe VSCode debuggers use a similar protocol, DAP.
Anyway, my point is, they have something great that works well for their users, so it doesn't seem like a very large market to make a less powerful version for a small set of new users.
I also suspect that the sort of developers who don't use their IDEs but instead use a free/open-source LSP-based editor would be unlikely to _pay_ for a JetBrains LSP-based offering, however high-quality/unique it might be.
How might a Language Server Protocol (LSP) work for debugging? And predictive debugging in particular? Any examples you've seen? Do LSPs have a mechanism by which debugging could work? (I haven't researched this yet.)
Note: I ask the above questions in an attempt to move the conversation towards predictive debugging. Like many, I visited this thread because I'm interested in the core topic: debugging. We're probably not here for a general discussion about the pros and cons of JetBrains / non open-source software / IDEs.
Thanks for sharing. I can see why you are excited about this. For others who might not click through:
> What is the Debug Adapter Protocol?
> Adding a debugger for a new language to an IDE or editor is not only a significant effort, but it is also frustrating that this effort can not be easily amortized over multiple development tools, as each tool uses different APIs for implementing the same feature.
> The idea behind the Debug Adapter Protocol (DAP) is to abstract the way how the debugging support of development tools communicates with debuggers or runtimes into a protocol. Since it is unrealistic to assume that existing debuggers or runtimes adopt this protocol any time soon, we rather assume that an intermediary component - a so called Debug Adapter - adapts an existing debugger or runtime to the Debug Adapter Protocol.
Hello, I am an LSP bot designed to help clarify human language interaction. The above commenter used the term LSP without defining it. LSP means "Language Server Protocol." ~ Your friendly neighborhood Hacker News LSP Bot
Sounds like a logical step forward from current debuggers!
I had some colleagues who bragged of not needing debuggers for their workflow... Well, sometimes I really do.
It does look very nice, but as with all fancy debugger features I wonder if I'd spend more of my life debugging the debugger when it doesn't debug than it'd save me.
I also think that while a debugger is a great and useful tool, being able to debug without one is a requirement, because sometimes you don't have one (or it stopped working properly; see recent Android Studio releases), and if you can't manage with just logs, you're really stuck. So it's worth getting practice at that.
It looks like a combination of static analysis and live execution of what they call "pure" parts of the code, i.e. code that doesn't affect state outside the current function.
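The purity detection described above can be sketched very crudely with Python's `ast` module: treat a function as "pure enough to pre-execute" if its source shows no obvious side effects. Real IDE analyses are far deeper than this; the blocklist of impure builtins below is just an illustrative assumption.

```python
import ast

# Hypothetical blocklist of obviously impure builtins, for illustration only.
IMPURE_CALLS = {"print", "open", "input"}


def looks_pure(src: str) -> bool:
    """Return False if the function source shows obvious side effects:
    global/nonlocal declarations, blocklisted calls, or attribute writes."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Global, ast.Nonlocal)):
            return False
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in IMPURE_CALLS):
            return False
        if isinstance(node, ast.Assign):
            # Writing to an attribute may mutate state outside the function.
            if any(isinstance(t, ast.Attribute) for t in node.targets):
                return False
    return True


pure_src = "def double(x):\n    return x * 2\n"
impure_src = "def log_and_double(x):\n    print(x)\n    return x * 2\n"
```

A debugger could then safely pre-evaluate calls to functions passing this check, while stopping at the boundary of anything flagged impure.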
Nice to see something like this finally being made. I always wondered why I couldn't just annotate the value of variables and then have the IDE evaluate the code as I write it to show how those variables are mutated. Always seemed like low-hanging fruit. It would also have the benefit of being a learning tool to help develop a mental model of how coding works.
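Python's `doctest` module is an existing, narrow slice of that idea: you annotate expected values inline and the tool checks them by actually executing the code. A small sketch, with an invented example function:

```python
import doctest


def mutate(values):
    """Double each non-negative value, dropping the rest.

    The values written after each >>> line are checked against real
    execution, so the annotations can never silently go stale.

    >>> mutate([1, -2, 3])
    [2, 6]
    >>> mutate([])
    []
    """
    return [v * 2 for v in values if v >= 0]


# testmod() runs every >>> example above and reports any mismatch.
report = doctest.testmod()
```

An IDE that did this continuously as you type would be close to the "annotate and evaluate" workflow described above.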
Very neat! Generally speaking, it seems like testing and debugging haven't been getting as much love as other aspects of computer programming.
Debuggers still work pretty much the same as gdb, only now with an integrated UI, and there's so much room for improvement. This is a great start and could be taken much further, e.g. with the ability to capture complete sessions so you can easily reproduce bugs and run them again and again until fixed.
We see the same in the related test-automation field (disclaimer: checksum.ai founder). Same testing methods, same testing problems, only fancier packages.
Testing gets a lot of love, at least in theory. Much less in practice because it is a whole lot of extra work.
As for checksum.ai, is it some kind of fuzzer? I think fuzz testing is underutilized; in fact, many of my colleagues don't even know it exists, and I have never been on a project where it was done. Generally, fuzzing is done in the context of security, but I see no reason why it should be limited to that.
Not exactly (disclaimer: on the checksum team). Our goal is to generate end-to-end tests that mimic user behavior, using real production sessions to train our models. These end-to-end tests can then be run during development, effectively smoking out real potential bugs before your latest deployment makes it into production. Of course, part of the generated tests could involve fuzz testing where it makes sense (if there is a form field, input, etc.).
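Fuzzing for correctness rather than security can be as simple as throwing random inputs at a round-trip property. A minimal seeded sketch against a toy run-length encoder (all names invented for illustration):

```python
import random


def rle_encode(s: str) -> list[tuple[str, int]]:
    """Collapse runs of equal characters into (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out


def rle_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)


def fuzz_roundtrip(trials: int = 1000) -> bool:
    """Check decode(encode(s)) == s on random inputs; plain fuzzing
    applied to correctness, not security."""
    rng = random.Random(0)  # seeded so failures are reproducible
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 20)))
        if rle_decode(rle_encode(s)) != s:
            return False
    return True
```

Property-based testing libraries automate exactly this loop, plus input shrinking, so the idea scales well beyond toy examples.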
Sounds awesome, but I never get to use PyCharm's debugger any more because all our tests run in VS Code devcontainer with a docker-compose full of helper containers like databases.
Not very convinced yet. I would expect it to fail or stop working quite quickly for the cases that are actually "worth" debugging.
For the first example I would just place the breakpoint on the return. The highlighting is quite nice, but I don't see much need for the prediction, which was probably hard to implement.
In languages like Java, features like "hot code replace" exist. In a function like this, if the result is not what I expect, I can simply change the code and restart the function without restarting the application. Not much need for prediction.
What I wish were more common is reverse debugging, where you can step backwards.
Wow. Your description of Time Travel Debugging plus Edit and Continue working together sounds amazing. Definitely want to try that some day. I suspect the side-effect/impure-function-call detection JetBrains has built here will be key to understanding boundaries of where/when you can roll back, edit, and continue, and where/when you can't.