The Once and Future Visual Programming Environment (techcrunch.com)
76 points by NelsonMinar on May 27, 2012 | 51 comments


The biggest problem I have with tools like Visual Age and XCode is they want to replace my tool chain. I want it to tie into my tool chain comfortably (I really hope Emacsy[1] succeeds; I would contribute time to that project!).

As I look at Light Table, I really hope they learn a LOT from Emacs:

• Editing is not secondary. In many IDEs, it feels like actually writing code is an afterthought.

• Mice suck[2].

• Allow me to integrate external tools easily.

• Make everything easily customizable and replaceable.

1. http://www.kickstarter.com/projects/568774734/emacsy-an-embe...

2. They are fine for detail-oriented, fine-motor-control work. Coding is not one of those.


Mice suck[2].

2. They are fine for detail-oriented, fine-motor-control work. Coding is not one of those.

As someone who has written tens of thousands of lines of code with only a mouse, I would modify this statement to "Mice suck when you are primarily using a keyboard." Editing code is detail-oriented, fine-motor-control work, and in my experience a mouse is better than a touch screen for editing (with current interfaces) and worse for typing code. I haven't done any coding with a keyboard for many years, so I can't compare directly, but I think the mouse, with my custom typing system, is easier in many situations. The problems come when you are using both mouse and keyboard.


I, too, would be interested to hear more, as I'm having difficulty picturing what you mean. My mental picture is "iPad with a mouse", but there is obviously something missing. I'm dubious that one can code at least as efficiently without a keyboard as with one, but that may be my lack of imagination.


The thing about coding is there's lots of thinking. I can write code efficiently because after a point, raw typing speed is not very important. I'm much less efficient at, say, copying out prose that's already written (unless my prediction engine is familiar with it...).

My software is like an on-screen keyboard with context-sensitive prediction, more sensible orientation and layout, and lots of shortcuts overloaded on the buttons, operated by the other mouse buttons and the scroll-wheel. So, for example, moving a word/page at a time, selecting lines/files, unindenting, etc. are just done by scrolling in the right place, control-A is middle-clicking A, and so on (the program just sends the appropriate keystrokes to make things happen). Selecting and toolbars/menus are fast because you're already using the mouse, and copy and paste is just select and middle click on Linux.
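
To illustrate the general idea rather than the actual program's internals, the core of such a system is basically a lookup from (on-screen key, mouse action) to a keystroke sequence. Everything in this sketch is invented for illustration; a real implementation would inject events through the platform's input APIs rather than print strings.

    #include <stdio.h>
    #include <string.h>

    enum action { LEFT_CLICK, MIDDLE_CLICK, SCROLL_UP, SCROLL_DOWN };

    struct binding {
        const char *key;     /* on-screen button under the pointer */
        enum action act;     /* what the mouse did                 */
        const char *emit;    /* keystrokes to synthesize           */
    };

    static const struct binding bindings[] = {
        { "A",      LEFT_CLICK,   "a"          },
        { "A",      MIDDLE_CLICK, "Ctrl+A"     },  /* select all     */
        { "cursor", SCROLL_UP,    "Ctrl+Left"  },  /* back a word    */
        { "cursor", SCROLL_DOWN,  "Ctrl+Right" },  /* forward a word */
        { "indent", SCROLL_UP,    "Shift+Tab"  },  /* unindent       */
    };

    static const char *lookup(const char *key, enum action act)
    {
        for (size_t i = 0; i < sizeof bindings / sizeof bindings[0]; i++)
            if (act == bindings[i].act && strcmp(key, bindings[i].key) == 0)
                return bindings[i].emit;
        return NULL;
    }

    int main(void)
    {
        printf("%s\n", lookup("A", MIDDLE_CLICK));    /* Ctrl+A    */
        printf("%s\n", lookup("cursor", SCROLL_UP));  /* Ctrl+Left */
        return 0;
    }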

RSI basically forced me to develop the system (I can't use a physical keyboard, and no existing software was or is good enough), and I have been using it for all my typing since 2003. I'm now working on making a commercial version aimed primarily at touch screens (gestures will replace different mouse buttons).


Sounds interesting, and a step in the right direction for re-thinking the way people write software. I look forward to seeing something in the future.


Screenshots and more information would be very nice!


As far as looks go it's as you'd expect, a grid of letter and symbol buttons, and a list of predictions. I don't think it's a good idea to disclose the few interesting features until I have something on sale (right now they're a competitive advantage), so sorry, no screenshots :-P

I'm not sure that there's much more to it than what I've already described. I think what it's got going for it is refinement driven by nine years of dogfooding day in day out, and that refinement isn't easy to describe as it's just many tiny little choices.


Tell us more about your mouse-driven custom typing system, please. Sounds interesting.


Am I the only one whose attitude to mice changed when they got a MacBook Air? Having the touchpad so close to the keyboard makes it practically instantaneous to switch modes.

Navigate around and target code with the mouse, edit with keyboard -- it's super effective!


This is why I insist on having a trackpoint mouse. It's not simply close to the keyboard; it's part of it.


Agreed.

If a trackpoint was not available, I would prefer the following layout:

1) Keyboard with the number pad moved to the left and an integrated trackpad on the right side (centered slightly above home row).

OR (w/out integrated trackpad)

2) [For right handed folks] Keyboard with number pad on left and mouse on right.

The goal is to minimize the distance your fingers have to travel to the 'mouse zone' while still staying on the home row. The number pad on the right (for right-handers) takes up too much space.


Think of the range from modular desktop PCs to small, highly integrated ultrabooks: one allows me to buy my own graphics card, change the hard drive and RAM easily, add new cards, and interface with anything; the other provides some USB ports and calls it a day.

A more integrated experience necessarily means less compatibility with external tools. This can be a big turn-off if you need those tools, but it can be a better experience if you don't, so it's a trade-off. We got a lot of integrated functionality with a Smalltalk IDE and no support for external tooling; they didn't even support the file system, going instead with an integrated image. The experience was completely different from, even diametrically opposed to, Emacs. Modern IDEs like Eclipse and Visual Studio make more compromises between the two extremes.

I don't really disagree with your first two points. But mice and menus have one important purpose: you don't have to remember a bazillion different key bindings to get work done. Yeah, developers actually prefer that. Also, there should be some focus on the debugging experience, not just the editing experience, because we spend a lot of time there too.


If you want to change software development (as opposed to just making another cool IDE), you have to solve for the modular case. The leaders of the software developer pack are tinkerers and customizers and extenders. Give them a USB port and you become a tool they put in their toolbox, but you haven't changed anything.

On mice/menus, this is a place where emacs got things better than ANY other piece of software ever written: don't make me leave the editing environment. There is no reason navigating hierarchical information and commands needs to be done in menus. In emacs, I can navigate using all the same tools I've already learned for navigating my code; I can search for both command names and command context and execute the command in that context. Especially for something like Light Table, with its mini-window-things, there is little need for menus. [A note: I'm discussing menus as a UI element, not the abstract notion of hierarchically structured commands implemented another way. Menus are ugly, out of context, and annoying to use. A good information architecture for commands is fine.]

A focus on debugging would be good. A tool that could make debugging more efficient would be very valuable.


I think you are voicing an opinion, not facts. Emacs is the ultimate modular IDE, yet it's not that popular! I was a big emacs fan in the past, but I've gotten used to graphical IDEs enough that I wouldn't consider going back. I'm sure many developers feel the same. Menus... you don't use them very often, and I could imagine replacing them with an emacs-like command line with appropriate auto-completion for finding commands, but that would have its own usability problems. I think the Cloud9 team is getting this more right by simply being more minimal in their design.

Now, whether developers would prefer a more integrated experience is either unknown or known to be false (wrt current technology). One of the reasons my version of the Scala IDE failed was that I was focusing too much on the editing experience whereas the community really wanted integration with every little dinky Java tool you could imagine.


I'm totally expressing opinion. :) I believe it is opinion grounded in 20 years of experience, but it is opinion none the less. In that time, I've gone from emacs to Visual Studio to multiple java IDEs and finally landed back on emacs.

Emacs is not perfect (far from it!). There are many places where some graphics would go a long way towards improving the UI. Re-implementing emacs would be a bad idea, but learning what works (e.g. everything is a buffer; everything is easily [for a programmer] extensible) will go a long way towards making an improved replacement.

My rant on menus is not about "it should be in the shell". I'm fine with the context being different. I just want my interactions to be the same. If I am searching for some code or searching for some editor command, the context is different but the interaction is the same. Every other piece of software makes the interaction different.

I point at emacs, but vi is another good example (just with a different implementation). And I don't want to disparage what the VisualAge folks did (I never saw the Smalltalk stuff, but even the much-less-mature Java stuff was amazing). But it didn't stick. Why is that? Once they answer that (I've given my opinion, but I could very easily be wrong), they'll be able to change software development.

The kickstarter made it sound like Light Table would revolutionize programming. I'm all for that, but what I've seen so far doesn't do this. Instead, it makes a really cool tool. There is nothing wrong with that (I use a lot of them), and if that's the case, I'll shut up. :)

I'm passionate about this because, in the end, emacs, vi, and every IDE I've used are far from ideal. I want to uninstall emacs and never look back.



This struck me:

Third, screen real estate matters. The traditional “everything is a file” approach is wonderfully portable. You can build an environment for working with files even for a very small display. Heck, you can work with files if all you have is a line-mode terminal. But flexibly arranged code snippets and fully interactive graphical debuggers require a lot of pixels.

I'd go even further. Programming requires a lot of pixels. You want to look at as much code as possible, no matter what. Maybe a 10' x 10' screen would be enough, but otherwise there's no limit to what I want, and I want it for myself, not for some GUI eye candy. As another post says, editing is not secondary, but for me it's a matter of never, ever crowding my precious screen real estate.

But by that token, any time someone tells you their application needs lots of screen real estate, they're admitting they'll waste that real estate on something I don't care about.


I wonder if that really is the case. Have any useful experiments been done comparing screen sizes?

I would have two questions:

1) What screen size is ideal?

2) Are there better ways to deal with information display than large screens?

I use laptops a lot. I have used 17" laptops and now I have a small one. I don't feel that productive on the small screen. That's subjective.

I use an external screen, a 30" screen. Am I more productive? On larger screens my main field of view is not much larger; I have to move my head to see things at the edges.

One thing that might be more productive is instant switching between contexts on the screen, for example switching between a debugger view and an editing session easily and quickly. At a certain screen size, that might be better than having both the debug and edit views on the same screen but each smaller. Sometimes programmers add another monitor so they can put these things on different displays. But is it really better to look over at another monitor than to just switch the context on the one you are currently looking at?


You know, I really don't know whether more screen space makes you more productive. I actually suspect it matters less than you'd think for the question at hand (whether a "visual" workspace will be accepted by programmers).

I was simply saying that I, and I suspect many typical programmers, want this screen space and will become annoyed at programs that work against it. I.e., put a widget between me and the data I really want, and you'll soon see me not using your application. Will this preference make me more or less productive? That's a further question.

On the other hand, I happen to think a second monitor is a great recipe for disabling neck injuries. But that is my rather particular view based on my studies of effective and ineffective postures.


  > You want to look at as much code as possible

I'm a big-screen skeptic. An art in programming is abstraction, enabling you to grasp complexity in small pieces. Seeing more doesn't enable me to grasp more. (There are non-code exceptions: rendered output, docs, etc.)


On the other hand, is it possible that the slew of visually driven programming environments we've seen in the last 30 years or so, running the gamut from Visual Age to Rational Rose, were novel, innovative, and helpful in many ways, but ultimately just not as effective as a coder who knows what she's doing with a lightweight editor and increasingly speedy runtimes on faster and faster CPUs?


That's possible.

But "why" is the sixty-four dollar question.

I think it is reasonable to say these environments gave some programmers the information they thought they needed, but the bare text was more useful. Text and GUI doohickeys are both pixel-based information. What makes one superior to the other?


This is interesting:

the closest anyone has ever gotten to creating a full dynamic environment for a C-language platform is Alexia Massalin’s Synthesis operating system. If you are a programmer of any kind, I’ll wager that Alexia’s dissertation will blow your mind

There have to be people on HN who know about this. Tell us more!

Wikipedia [1] says that the Synthesis kernel relied heavily on self-modifying code and that adoption of its techniques was inhibited by its being written in assembly language. That makes sense; it probably relied on the code=data aspect of assembly language, something that (Forth aside) you mostly don't get back until you've left the lower levels well behind.

[1] http://en.wikipedia.org/wiki/Synthesis_kernel#Massalin.27s_S...


He wrote special assembly-language templates that allowed fast constant propagation, constant folding, and code inlining at run time (at a time when state-of-the-art machines ran at around 33 MHz to 50 MHz).

The actual thesis is mind-blowing (I'm reading it now). Not only does it create kernel syscalls on the fly, but also specialized interrupt handlers that handle only the devices in use (if a device being used needs an interrupt handler, a new handler combining the existing code and the new code is generated).

The generalized system call (TRAP #15, since his machine used the Motorola 68030) was fast, but a user-mode program could also designate up to fifteen system calls (per thread) to be called directly via TRAP #0 through TRAP #14. The end result: a system call was about twice as expensive as a native subroutine call (whereas in contemporary Unix systems it was more like 40x to 100x as expensive).

Another reason run-time code generation isn't used much (one mentioned in the thesis) is that instruction caches are hideously expensive to flush.
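
To get a feel for the "template plus run-time constant folding" idea in miniature, here is a toy userland sketch of run-time code generation. It is my own illustration, nothing like Synthesis's actual templates, and it assumes x86-64 Linux with a kernel willing to hand out a writable, executable mapping; on architectures with separate instruction caches you would also need a cache flush, which is exactly the cost mentioned above.

    /* Toy run-time specialization: build "return <constant>" on the fly.
     * Assumes the x86-64 System V ABI; not portable, not Synthesis's design. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*const_fn)(void);

    /* mov eax, imm32 ; ret  -- the imm32 occupies bytes 1..4 */
    static const unsigned char template_code[] = {
        0xB8, 0x00, 0x00, 0x00, 0x00,   /* mov eax, 0 (placeholder) */
        0xC3                            /* ret                      */
    };

    static const_fn make_constant_fn(int32_t value)
    {
        unsigned char *buf = mmap(NULL, sizeof template_code,
                                  PROT_READ | PROT_WRITE | PROT_EXEC,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return NULL;
        memcpy(buf, template_code, sizeof template_code);
        memcpy(buf + 1, &value, sizeof value);   /* fold the constant in */
        return (const_fn)buf;
    }

    int main(void)
    {
        const_fn forty_two = make_constant_fn(42);
        if (forty_two)
            printf("%d\n", forty_two());         /* prints 42 */
        return 0;
    }

Synthesis did this sort of thing for whole syscalls and interrupt handlers, in kernel space, with hand-tuned 68030 templates; the sketch above only captures the flavor of patching a constant into pre-written code.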

But given stuff like TCCBoot (http://bellard.org/tcc/tccboot.html) I think this could be a viable approach to kernels.

(Edit---grammar)


Her (at the time, his) kernel had several interesting features. One was run-time code generation. Another was an object system much less indirect than the standard C++ vtable approach; you know,

    mov (%eax), %eax      # fetch the vtable pointer from the object
    mov 16(%eax), %eax    # fetch the method pointer from the vtable
    call *%eax            # indirect call through that pointer

That involves two memory fetches.

One step less indirect would be to copy the vtable into every object, which of course means that every object could have a separate vtable. Synthesis's quajects are two steps less indirect: every object has a separate copy of every method. So a method call looked like (the 68030 equivalent of)

    add $32, %eax         # point at the method code embedded in the object itself
    call *%eax            # call it directly (no table lookup)

No memory fetches.

The fact that every object had its own copy of each method meant that the method could be optimized for that object. If a method was more than a few instructions long, you'd want to factor its main body into a regular function, but many methods aren't.
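
For anyone who thinks more easily in C, here is a rough sketch (with invented names) of the first two levels of indirection. The quaject level can only be gestured at in a comment, since per-object copies of machine code are not something portable C can express.

    #include <stdio.h>

    /* (1) Classic C++-style vtable: object -> vtable -> method. */
    struct vtable { void (*write)(void *self); };
    struct obj1   { const struct vtable *vt; int fd; };

    /* (2) One step less indirect: the function pointer lives in the object
     *     itself, so every instance can carry its own implementation. */
    struct obj2   { void (*write)(void *self); int fd; };

    /* (3) Quaject-style goes further still: each object holds its own copy
     *     of the method *code* at a fixed offset, so a call is just
     *     "add offset; call" with no data fetch at all.  That requires
     *     run-time code generation, so it is only described here. */

    static void write_generic(void *self) { (void)self; puts("write"); }

    int main(void)
    {
        static const struct vtable vt = { write_generic };
        struct obj1 a = { &vt, 0 };
        struct obj2 b = { write_generic, 0 };

        a.vt->write(&a);   /* two memory fetches before the call */
        b.write(&b);       /* one memory fetch before the call   */
        return 0;
    }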

One of the interesting features of quajects is that you can instantiate a quaject not only with its own data but with its own "callouts". In a traditional object system, wherever you have a simple little object, that object implicitly carries within it this whole pyramid of things that it calls, and things that those things call, and so on: allocators, file buffers, system calls. More flexible languages like Ruby have facilities for temporarily replacing some part of that pyramid, say, for testing. But each call out of a quaject is just as overridable as an instance variable. When you instantiate the quaject, you supply callouts to it.
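
In plain C terms (again only a sketch with made-up names), a callout is just another slot you fill in when you create the object, no different from a data field:

    #include <stdio.h>

    /* A quaject-like buffer whose "callouts" are supplied at creation time,
     * exactly like its data, so a test can swap in fakes. */
    struct buffer_q {
        int  (*flush)(const char *data, int len);   /* callout       */
        void (*on_full)(void);                      /* callout       */
        char data[64];                              /* instance data */
        int  used;
    };

    static int flush_to_stdout(const char *d, int n)
    {
        return (int)fwrite(d, 1, (size_t)n, stdout);
    }

    static void warn_full(void) { fputs("buffer full\n", stderr); }

    int main(void)
    {
        /* Instantiate the object, supplying its callouts. */
        struct buffer_q q = { flush_to_stdout, warn_full, "hello\n", 6 };
        q.flush(q.data, q.used);   /* a test could pass a fake flush instead */
        return 0;
    }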

There are several more interesting features in Synthesis, including lock-free synchronization (not a new invention at the time, but popularized by Synthesis) and PLL scheduling for soft-real-time response. Also, the dissertation is a rollicking good read.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4...


Is it just my phone or does this Wikipedia link forward iOS devices to the self-modifying code article?

Edit: Not just my phone. But I tend to expect Wikipedia articles to have a title matching the URL unless there's a "redirected from..." subheading; it seems it's just the mobile site that doesn't show that subheading.


This article has a host of interesting points and references which are great to read, but the one-upmanship of 'it's probably been done before' (which seems omnipresent when discussing visual programming) really annoys me, because the upshot is to discourage anyone who is interested in this kind of work from pursuing it.

My point is that whilst lots of ideas have been tested before, what hasn't been 'done' is a visual programming environment that really takes off. And when/if that happens, it will be worth more than 1000 well-written research papers and PARC prototypes, and its creators will deserve more success than 1000 academics who have tested some idea (or merely seen someone else try) but never seen it through to widespread adoption...


I came away from it with a very different message: This has been tried many times before, by teams of brilliant people, and you can still find their notes.

Half the problem is that every N years, people bravely try it again...and start from zero, because they didn't do much research. History is full of great ideas that didn't take off because computers weren't fast enough, because touchscreens weren't popular yet, because wireless didn't exist, etc. They often used different names, but there are many brilliant ideas waiting for another dance.


Yes, the article is supportive of new work (like Light Table), but it's still very heavy on "we've done it before". The title, "Hey Kids, Get Off My Lawn", and the closing sentence, "when that new thing comes along, I’ll tell you we built an early version of it at the Media Lab...", pretty much sum up the sentiment that irks me.


If you're interested in visual programming environments and try to look up prior efforts at all, you WILL find this sentiment to be true. It's somewhat depressing, actually. You'll not just find all these cool-sounding environments from the '80s that seemed light-years ahead of today (or even something like Light Table), but you'll also find studies on the difficulties people faced with them and why they failed.

Text is not an easy thing to dislodge from the programmer's toolset.

My personal opinion keeps switching between "Text is basic and elemental, so it's the most natural way to represent code and is therefore preferred" and "We haven't built tools good enough or representations revolutionary enough (or scalable enough), and that's why text remains preferred".

I can see how someone who's built a non-textual programming environment can come to a "been there, done that" kind of attitude. Just take his enthusiasm at new attempts at the problem as a more pertinent response and ignore the rest.


Another example: APL used a bunch of mnemonic symbols for operators -- for example, the "reverse" operator is a circle with a vertical line through it. Mirror image. This made sense back in 1962, with Mad Men-era Selectric typewriters*, but it never sat comfortably with ASCII. Very dense, expressive code. Like thinking in kanji, after a lifetime of alphabetic hackery.

* http://en.wikipedia.org/wiki/Selectric

The idea has many things going for it, and is worth another go in the era of touchpads and graphic interfaces. APL has always had a die-hard following, and intriguing ASCII offshoots such as J and K, but it's time for a second chance.

The APLs have many other interesting aspects (they're all about data-parallelism and functional programming of a different flavor than Lisp and Haskell), but to my knowledge nobody has sincerely retried the glyph-based language thing again.


You need to learn from the mistakes of the past. Urs Holzle at Google knows more about these systems (he did Self) than most, yet he's not advocating their use either. And certainly Google has the resources to try.

The problem with Smalltalk and Lisp IDEs was that no one was using those languages. The other problem is that they didn't help that much when writing code. They helped a lot for navigating and learning a new code base, but after that you just want to stare at text. That's why SLIME is the preferred IDE for Lisp today.

Visual Studio is making steady incremental progress with visual programming. Its Workflow graphical environment is useful for building flowcharts and state machines. Its DB viewers can help construct SQL queries. And GUI builders are visual programming environments. You get help whenever you write a method; it shows all the method signatures, etc. VS and IBM's Rational IDE generate code from UML models. You have a special environment for creating unit tests. I haven't used Eclipse, but it must do many of these things too.

So only some of the Emacs and Vim guys are pining for visual programming. Everyone else is using an adequate tool already.


There are plenty of visual programming paradigms in development today. Xcode's Interface Builder, and Visual Studio's equivalents, handle the tricky layout and placement before it is converted to text for the compiler. But the rest of development is done through purely textual coding, as the visual paradigm becomes exponentially more difficult to manage as the project grows. So what happens is a hybrid.


The tools mentioned have very little to do with GUI design and a lot to do with being able to explore and modify the whole system (not just the program) while everything is running.

Do the Squeak by Example tutorial and come back when you finish it.


No. ;)

Edit: I just got downvoted for replying "No." to a comment on HN. Hah! That's this site in a nutshell for you.


I think it's a matter of attitude. Your response didn't add anything to the discussion, and you seemed to refuse perfectly valid advice; judging by your message, you didn't read, or didn't understand, the original article, and the advice I gave you, while given in a more than a little condescending tone (sorry for that), would enlighten you. People around here don't really welcome that.

I regard it as a feature rather than a bug.


> Do the (thing above) and come back when you finish it.

Do you even know what advice(1) is? That's not advice as I understand it. That's an order. You're giving orders to a complete stranger on a web forum. Think about the intelligence behind that for a second.

My comment on this site was referencing the comment by Paul Topping(2) on the TC link. His opinion, and one I agree with, is that visual languages become too cumbersome for anything but small projects, and my example is using visual programming for layout. The XIBs are XML in Xcode.

1) http://dictionary.reference.com/browse/advice

2) http://techcrunch.com/2012/05/27/hey-kids-get-off-my-lawn-th...


I'd be interested to hear, from people who have experience with these older systems, why they didn't take off.

Maybe it's that they were developed almost pre-web? I was using the web in 1994 on Windows 3.1 but certainly never heard of Smalltalk. I think it must have been the Microsoft ecosystem, and then later the Linux ecosystem, that kept "better" tools in the shadows. In 1994 Windows was a fantastic upgrade from DOS. You could not just have multiple windows, but multitask!

These days it seems like programming platforms like node.js can take off astonishingly quickly because of faster and faster dissemination on the web. It's not just more adoption, but more contribution too.


Most languages were proprietary and expensive through the '80s. When C took off, it wiped out (or much reduced) the use of expensive proprietary languages. C was free, so a lot of people learned it in university and wanted to use it on the job later because they were used to it. Since it was free, it was easy to introduce to companies. It established a base, and then when Java came out in 1995, also free but a higher-level language than C, it took off like a rocket. Now the idea of expensive proprietary languages seems absurd.

Many tools have been developed that support C-family languages and most programmers depend on those tools. It's a huge job to move out of that and into an image-based language like Smalltalk because everything changes at once. That's a major reason why new frameworks like node.js or Rails can take off quickly now: They fit into most programmers' normal workflow so they are easy to adopt.

I worked in an image-based language in the 80s and it drove me crazy that I didn't have diff and grep. I was glad to leave that language when I moved on to a job that used a conventional text-file language. I'm sure I'm not the only one who found it awkward.


Execution. Execution. Execution.

Really. Ideas are 1% of the work. Doing the idea is 9%. Doing it well is the other 90%. Different things matter every time, that's what makes it so hard.


And the missing 'other' 100% is getting anyone to take notice!

One can have the most wonderful, state-of-the-art, mature product - that remains largely unknown by the 'majority' because... well, why?? I still can't figure out this last bit...

(Modern Smalltalk is, unfortunately, a perfect example of this.)


I toss the "take notice" into the execution pile. That's why it's 90% of the work. That includes understanding why brands fail regardless of the quality of the good or service. It includes understanding the pains of your userbase, the expectations they have going in, and what you have to do in order for them to converge and spread your technology in an organic manner.

It is the holistic picture that constitutes success. That's the "other 90%" and it's a black art.


This is not useful. Great execution on a bad idea still results in a bad experience. Truly, you need both great ideas and great execution; you can't just write ideas off as 1% of the work and expect to win.


Surely. I think I'm a bit bitter from all of the would-be CEOs who have a wonderful idea and just need a rock-star programmer to, um, do the design, engineering, QA, customer feedback loop, product definition, you know, the actual work.

Let's take YouTube: "I want to watch a video on demand." Really? What a brilliant 1920s-style idea I've never heard before. It was how they pulled it off that mattered.

What about an iPhone or Android? "I want one device that works as a PDA and a phone that does everything I need." Really? Never heard that idea before.

Every now and then someone comes along with a truly innovative and truly brilliant idea; and may God Help Their Soul.

I've personally suffered from being ahead of the curve many times. Or was it bad execution? In 2002, I had this AIM bot that you would send small messages to; they would then be posted on a website under your AIM name. You could follow your AIM buddies and see their messages.

Yeah, it's called Twitter. I did it on top of AIM in 2002. In 2003 I did another bot that would proxy messages between anonymous users. Yep, you've heard of that too; it's called Omegle.

Then in 2003 I did a multiplayer extension to an NES emulator that used DCC IRC connections so that you could, Fserve-style, play well-known games with random people over IRC.

Sound familiar? Not yet. There will be a node.js/socket.io version of this soon by someone, I know it.

So yeah, even with novel ideas, I still think the 1/9/90 rule applies. The 90 here includes the very important element of timing, along with, of course, target audience.

So I'm not trying to brush off the necessity of true ideas, just trying to keep their importance in perspective. Look around: many of the successful things that we use are totally devoid of true innovation (as in, I'm not using the First One). The desk I sit at, the monitor I use, the keyboard I type on: they are just decent executions of old ideas.


The Microsoft dev tools were important for PCs, but they weren't the only game in town. Lots of people were still using Turbo Pascal and later Delphi -- there weren't a lot of tools that could generate Windows binaries yet. Some people were using dBASE, FoxPro, etc. Watcom C targeted 32-bit DOS extenders, which were important for games.

The most important thing was (and still is) that you could easily integrate with the OS and UI framework of your choice. Smalltalk couldn't provide that because it tried to replace all that with its own code -- which the implementor would probably further tweak into unrecognizability.

UNIX of course was its own world at this point, but it also was driven by pragmatics. You aren't going to write your MUD in Lisp if it's going to thrash your swapfile, or if your friend down the hall can't hack on it with you.

It's also questionable whether these tools were "better". The one time I tried to use VisualAge all I could think about was getting back into my comfy text editor. I am hopeful however that we can eventually improve on the age-old compile/edit/debug cycle.


Most of this predates PCs and Microsoft dev tools (unless we are talking about Microsoft BASIC) by a good couple of years. When Smalltalk was conceived, there was no OS or GUI framework to integrate with, because all the OS did on a personal computer was file I/O and starting programs.


True enough, and there's an interesting history of Smalltalk on early Apple computers that I wasn't aware of: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V... Makes sense, since you can see the Smalltalk influence today in Objective-C.


One interesting reference on the subject is the final section of Richard Gabriel's Patterns of Software[1].

And just to play devil's advocate: both Smalltalk and Lisp were designed ex novo to do things that had never been done before on machines that didn't yet exist. In contrast, node.js was built on top of pre-existing languages and tools already familiar to its target audience, to solve broadly similar problems in familiar environments.

Not that there's anything wrong with that — but it's hardly a fair comparison.

[1] http://www.dreamsongs.com/Files/PatternsOfSoftware.pdf


VisualAge for Java is another example of a Smalltalk-inspired visual programming environment that never quite caught on. It had a lot of cool, advanced IDE features, particularly for its time. It was also slow and kind of crashy, and a complex program in its own right.


VisualAge Smalltalk begot VisualAge for Java, which then became Eclipse, although some things changed a lot over the transition (more interop, no images). I think IBM in turn got VisualAge from their acquisition of OTI; Dave Thomas has the history, and you could ask him if you ever attend OOPSLA.




