About Computer Output History, Why did screens take over the world so late?
|
Jan 17 2025, 10:31
|
Katajanmarja
Group: Gold Star Club
Posts: 651
Joined: 9-November 13
|
As of late, I’ve been thinking about Linux command line questions more than I usually do. An acquaintance once enlightened me that the reason behind the extreme brevity of POSIX commands is that, at the time they originated, computer output was most often physical print-outs. Shorter commands meant one could save a lot of paper.
Now, there’s nothing surprising about the fact that ENIAC or the early-50s computers used output methods other than monitors: punched cards or tapes, lights, paper sheets with text, you name it.
But without much understanding of the technology, I am baffled to find out that the triumph of monitors did not begin before the 70s. Some sources cite the high cost of monitors prior to that. But the television set had been invented during the interbellum, and TV broadcasts began to have national-level impact in some countries during the 50s. Considering this, it would seem obvious that a TV set should have become the primary output method for most new computers by 1965, if not earlier. The price of a black-and-white TV must have been negligible compared to the price of a minicomputer in the mid-60s.
I can imagine two reasons here: either the typical output of the time was such that people would rather keep a physical copy for future reference, or somehow it was really complicated or resource-hungry to produce an output signal compatible with a TV set. Or maybe there were some other factors I have no idea about?
Please share if you know about this.
--------------------
______________________________________________________________________ “...But now, in this world consumed with conflict and rage, being nice is about the most revolutionary and rebellious thing you can be.” — Professor Elemental [ danbooru.donmai.us] Have some, please have some more.
Jan 17 2025, 14:16
|
EsotericSatire
Group: Catgirl Camarilla
Posts: 12,359
Joined: 30-July 10
|
It was a combination of what you could do with the machines, the work they were replacing and whether the input/output methods were efficient for the task.
It took a while before computers had enough registers and memory to do anything on the computer that did not require just instant output.
The first program was run in 1948, but direct input to the computer via keyboard did not arrive until 1956. I think the Atlas (1962) had a monitor, but it was not until the Xerox Alto in the mid-70s that you could move loads of functions onto a monitor and interact via keyboard and mouse with a graphical user interface.
So it made a lot of sense once computers became small enough that one person, rather than a team, could operate them, and once you could move a bunch of activities and functions to the screen.
They used to have back offices full of people updating ledgers, doing paperwork, doing calculations and so on, which seems absurd these days.
By the time I tried punch-card, tape, or switch-operated computers in the 90s, it was a massive pain in the ass to input anything. Loading a program took boxes and boxes of punch cards that had to be kept in order. It was like one person's job just to maintain the punch cards lol. Government bureaucracy likes to hold on to ways of doing things for ages.
--------------------
Jan 17 2025, 20:49
|
Katajanmarja
Group: Gold Star Club
Posts: 651
Joined: 9-November 13
|
Thank you for taking the time to give me an answer, Eso. QUOTE(EsotericSatire @ Jan 18 2025, 00:16) it was a massive pain in the ass to input anything
which seems absurd these days
I grew up playing with abandoned punch cards, although I found them pretty boring since it was hard to imagine how they had been used. Also, I was quite young when I got to listen to stories about how advanced students at a not-so-large European university could book some computer time for statistical analyses in the early 70s. I’ve always found some fun in reading about vacuum tube computers. What you are describing does not sound that absurd to me. QUOTE(EsotericSatire @ Jan 18 2025, 00:16) with keyboard and mouse with a graphical user interface
The era that I was thinking about while writing my question is certainly before using a mouse became a natural part of interacting with computers, although not necessarily before keyboard input became a common feature. Mind you, seeing someone use a DOS or UNIX computer (or an old Commodore, for that matter) with keyboard only wasn’t that rare in the very early 90s, even though mice had been spreading rapidly since the advent of the i386-plus-Windows combo. QUOTE(EsotericSatire @ Jan 18 2025, 00:16) to do anything on the computer that did not require just instant output
This is the part that I am not getting. I’d imagine that once you had that huge pain in the ass doing the input, you’d much prefer to see a monitor preview of your instant output (unless it was just a one-liner). Would it not feel twice as bad to hold a large, meaningless paper printout in your hands if you had made a small input mistake?
--------------------
______________________________________________________________________ “...But now, in this world consumed with conflict and rage, being nice is about the most revolutionary and rebellious thing you can be.” — Professor Elemental [ danbooru.donmai.us] Have some, please have some more.
Jan 18 2025, 08:48
|
Gingiseph
Newcomer
Group: Members
Posts: 24
Joined: 20-September 22
|
QUOTE(Katajanmarja @ Jan 18 2025, 07:49)
I’d imagine that once you had that huge pain in the ass doing the input, you’d much prefer to see a monitor preview of your instant output (unless it was just a one-liner). Would it not feel twice as bad to hold a large, meaningless paper printout in your hands if you had made a small input mistake?
https://www.youtube.com/watch?v=L743MjJthHY
I find it wonderful that we must talk about informatics with the eye of an historian. Today's things look obvious because we are biased by the success stories of specific products; think about graphical 3D-accelerated cards. I hope the video above can help with this perspective. (I'm not that old, I just happen to be interested in tech and tech history.)
Jan 20 2025, 21:05
|
Moonlight Rambler
Group: Gold Star Club
Posts: 6,375
Joined: 21-August 12
|
QUOTE(Katajanmarja @ Jan 17 2025, 13:31) I can imagine two reasons here: either the typical output of the time was such that people would rather keep a physical copy for future reference, or somehow it was really complicated or resource-hungry to produce an output signal compatible with a TV set. Or maybe there were some other factors I have no idea about?
Please share if you know about this. It was resource hunger. Firstly, to use an actual TV screen for input, you need a chip that can keep up with outputting a full screen of text every 60th or 50th of a second depending on where you are. Most terminals actually used higher scan rates than those of (non-french-819-line) television. For reference, a cathode ray tube television set works by sweeping across the surface of the screen while varying the intensity of the stimulation of various phosphor particles on the tube surface. it does so line by line, top to bottom. To draw your own picture on the display you have to control the voltage used to stimulate phosphors while keeping up with the drawing speed of the screen (which you cannot change very much). You also have to provide the pulses that tell the TV to move to the next line, or to move to the top of the screen again to start the next frame. Now let's think about displaying a character of text. How many pixels do you need to render a glyph and make it distinguishable from others? You can do probably about 8x8 pixels minimum for ASCII. Let's even be generous and assume we only need capital letters (or only lowercase letters) to save some space. We have 26 displayable alphabetic characters, plus space, ten digits, and punctuation (),./*!@#$%^&*"';:[]{}+=-_`~\| - so another 30 chars (for 56 total). Each of those at 8x8 is 64 * 56 = 3584 bits of memory, or 448 bytes. All of that has to be stored permanently in some way, but before the advent of ROM chips there weren't very many ways to do that. [ en.wikipedia.org] looking at wikipedia, somewhere between 1965 and 1969 we would have been able to store that much on a single chip. That's without considering any program code that we'd also have to store, or the processor itself. In actuality we'd probably use hardwired discrete logic chips instead of writing a program and using a central processing unit at this point in time (mid 1960's). And indeed, some of the first microprocessors were miniaturized implementations of architectures made for terminals out of discrete parts ([ en.wikipedia.org] wikipedia). And then, ignoring the processor problem for a moment (remember - microprocessors weren't a thing before the 1970's), remember that you have to draw this text to lines. And to do this you have to keep track of the text you're holding on the screen between lines. That or you use a "memory tube," but that is a further complication. So let's just say you use RAM. Your primary choice will be core memory until the Intel 1103 dynamic RAM chip in 1970. That or [ en.wikipedia.org] shift register memory. So let's be generous and use a small resolution screen, and say we are naively storing a 40 by 12 screen of text in ASCII. You will need 480 * 7 bits of RAM (since ASCII is a 7-bit character set and does not need the eighth bit of a byte). That's 3360 magnetic cores you need to wind, or four intel 1103 chips in 1970. Then you need to actually figure out how to draw that internal stored representation of the text to the screen. To do this, you would read an ASCII character out of your storage buffer memory (RAM), then you feed that number into some logic to look up the corresponding character's actual pixels in your ROM. You need to do this eight times per line of text, since a CRT TV screen needs to draw the top pixel first, then the next pixel, etc… for each character on the line for each textual line. 
By around 1970 there were chips like the Signetics 2513 that could handle at least some of that complexity for you, but until then you had to do the logic yourself. If we use a normal TV screen for this purpose, the beam sweeps across one line of the screen every 60-70 microseconds or so. At 40 8-pixel-wide characters a line, that's 320 pixels, which means a pixel has to be written out every 0.1875 microseconds. That means you are cycling through pixels at somewhere over five megahertz (more than five million times a second), without using a CPU. This is possible [ en.wikipedia.org] with logic gate chips from the mid-1960s; before then, it would need to be done with discrete transistors, which is also possible but quite costly.

I could go on, and if you want me to I will. But basically it literally was not practical to draw a picture on a TV screen. A specialized tube, like those used for radar, that lets a picture stay visible for longer before the phosphor fades, might have been usable and was used in some cases, but even those need a *lot* of logic behind them to decode pixels and such. The complexity of generating a display for an analogue TV signal cannot be overstated. Which is why things like the Magnavox Odyssey are so primitive. And that's not even talking about the complexities of reading input from a keyboard or some other input device, or of scrolling, clearing the display, or updating the position of the cursor.

I drew up partial plans for a "TV typewriter" (which is basically what you're talking about) years ago and ran into most of these issues. Without a microprocessor, especially, it becomes quite a harrowing prospect, but it's still a fun thought experiment. [ en.wikipedia.org] https://en.wikipedia.org/wiki/TV_Typewriter?useskin=monobook

Some early CRT controller chip datasheets are also useful in understanding this, although experience reading a ton of datasheets in general, until you start piecing bits of them together, is probably a prerequisite for a more thorough understanding of this one. The 6845 is actually a very simple implementation as far as drawing a picture of a text buffer on a screen goes, so the block diagram on page 9 might help explain why this wasn't very easy to do without large-scale integration (like CPUs). Access via HTTP, not HTTPS: [ www.spic.net] http://www.spic.net/mirrors/misc/mindcandy...rola/mc6845.pdf

TL;DR: controlling a television with a computer is actually a pretty complex task that requires a lot of digital logic, and it was cost-prohibitive until the early 1970s, when integrated circuits got better and we started getting things like CPUs and RAM chips.

This post has been edited by Moonlight Rambler: Jan 20 2025, 23:11
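And as a quick sanity check of the dot-clock arithmetic above (the 60 microsecond line time is a round figure; real TV lines lose part of it to blanking, which only pushes the required speed higher):

/* Back-of-the-envelope dot-clock arithmetic for the figures above. */
#include <stdio.h>

int main(void) {
    const double line_time_us    = 60.0;   /* one TV scan line, roughly */
    const int    chars_per_line  = 40;
    const int    pixels_per_char = 8;

    int    pixels_per_line = chars_per_line * pixels_per_char;   /* 320      */
    double us_per_pixel    = line_time_us / pixels_per_line;     /* ~0.1875  */
    double dot_clock_mhz   = 1.0 / us_per_pixel;                 /* ~5.3 MHz */

    printf("%d pixels per line -> %.4f us per pixel -> %.1f MHz dot clock\n",
           pixels_per_line, us_per_pixel, dot_clock_mhz);
    return 0;
}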
--------------------
Jan 21 2025, 19:01
|
Katajanmarja
Group: Gold Star Club
Posts: 651
Joined: 9-November 13
|
QUOTE(Moonlight Rambler @ Jan 21 2025, 07:05) It was resource hunger.
Simple answer, and not especially surprising. How you went on to elaborate further helped me understand not only the main reason but also the rough timing of when computer monitors appeared. Thank you very, very much for your time. QUOTE(Moonlight Rambler @ Jan 21 2025, 07:05) I drew up partial plans for a "TV typewriter" (which is basically what you're talking about) years ago and ran into most of these issues.
I figured someone here in the tech chat would be able to give me the answer I was after, but not that said person would have that concrete first-hand knowledge. QUOTE(Moonlight Rambler @ Jan 21 2025, 07:05) I could go on, and if you want me to I will.
I believe I’ve gotten a picture as clear as possible with my nonexistent knowledge of engineering. Other people, raise your hands if you are willing to listen! I’m sure I would be if I understood electronics well enough.

But instead... let me present another thought experiment related to this one! Suppose that, for some reason, we really want to save paper and produce a transistor computer with an output preview screen as early as possible – around 1965? Building a much larger unit is counterproductive. The mass of paper saved, say, within a year of active use should be more than the mass of extra components and casing needed, and that’s a minimum goal.

So, let’s say we don’t use anything TV-like. We’ll use a liquid crystal display, the kind that pocket calculators used for decades. Oh wait, do those even exist in the 1960s? If not, let’s use a vacuum fluorescent display; those are already around for sure.

Of course we’ll be generous and assume we’ll only need capital letters – 26 or maybe a few less. We might drop Q or J, for instance, if saving a crucial bit in the encoding depends on that. We’ll minimize punctuation as much as is viable, but I guess we cannot follow that path too far; if we cannot display programming languages, we’ve largely defeated the purpose. Ten digits, of course. ASCII has just been published, in 1963, but is not yet such a universal standard that it would tie our hands. Yet I wonder how many code points in total we might need for space and other "characters" not so obvious to a layman, such as backspace or newline. I’m afraid there’s no way we can make do with fewer than 48 code points, and above 50 is quite likely despite all our efforts. (Feel free to present a calculation of the minimum needed if that sounds like fun!)

Now, a VFD intended to show digits only must have seven segments per character, plus an eighth segment if we need decimal points. I’m pretty sure that will not suffice here. Frankly, I don’t know how much we could optimize, but I think fourteen segments is quite enough for our fifty-four-or-so code points. You can either present your optimization plan (I’d love to see it) or round up a bit. For example, 14 × 56 = 784 bits of memory, am I correct?

Finally, let’s say that the maximum number of characters (including punctuation symbols, spaces, line breaks etc.) we can display at a time is 512. In addition, we’ll probably need a buffer memory of something like 2048 characters, or maybe 5120 characters at most. In this way, we can return to previously viewed material to some limited extent. Does this sound like it’s doable if we have a transistor minicomputer or a not-so-large mainframe, without either microprocessors or integrated circuits, or do we still have to wait for them to appear?

We should indeed address the complexities of reading input from a keyboard. I guess we don’t need to show a cursor to the user; the only way to move on the screen is by inputting more characters or by deleting them with backspace. I guess some way to actively clear the display is needed, but it does not have to be especially user-friendly. Other than that, every time the display gets filled with characters, it’s emptied and we start at the top left corner again. This is where the buffer memory I mentioned, and commands to summon its contents, will come in handy. Sure, leaving one or two lines of text when emptying the display would be a nice feature and greatly improve the user experience, if not too resource-hungry.
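To keep my own arithmetic honest, here it is written out as a tiny program. Every constant in it (code-point count, segment count, buffer sizes) is just my guesswork from above, not data about any real 1965 machine:

/* Back-of-the-envelope check of the figures in this thought experiment. */
#include <stdio.h>

int main(void) {
    const int code_points   = 56;    /* letters, digits, minimal punctuation, a few controls */
    const int segments      = 14;    /* segments per displayable character position */
    const int visible_chars = 512;   /* characters shown on the display at once */
    const int buffer_chars  = 2048;  /* scroll-back buffer, lower estimate */

    /* Smallest number of bits that can encode all the code points. */
    int bits_per_char = 0;
    while ((1 << bits_per_char) < code_points)
        bits_per_char++;                                   /* 56 -> 6 bits */

    printf("segment decode table: %d x %d = %d bits\n",
           segments, code_points, segments * code_points);               /* 784   */
    printf("visible text memory : %d x %d = %d bits\n",
           visible_chars, bits_per_char, visible_chars * bits_per_char); /* 3072  */
    printf("scroll-back buffer  : %d x %d = %d bits\n",
           buffer_chars, bits_per_char, buffer_chars * bits_per_char);   /* 12288 */
    return 0;
}

If that is roughly right, the scroll-back buffer alone already runs to thousands of bits before we even touch the display-driving side.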
--------------------
______________________________________________________________________ “...But now, in this world consumed with conflict and rage, being nice is about the most revolutionary and rebellious thing you can be.” — Professor Elemental [ danbooru.donmai.us] Have some, please have some more.
Jan 21 2025, 19:20
|
Moonlight Rambler
Group: Gold Star Club
Posts: 6,375
Joined: 21-August 12
|
An LCD would indeed simplify the issues of drawing a picture dramatically. But those didn't even really start to appear until the 1970s, and dot matrix ones took even longer. The circuitry to control the dot matrix screen of a Game Boy is not too complicated, though. I was actually thinking about this in the shower after I wrote up my post the other day.

And according to Wikipedia, "The first multi-segment VFD was a 1967 Japanese single-digit, seven-segment device made by Ise Electronics Corporation." And that was for a presumably seven-segment (or fewer) digit, which can't draw the letter "W" for instance. So printing output on a teletype was just a much easier thing to do in general.

As for a buffer of 2048 characters: assuming a character set of over 32 but at most 64 chars, that's 6 bits per character (2^6 = 64, 2^5 = 32). That might give us room for some control codes (something I didn't mention, but which are also important for a terminal to have). 2048 * 6 = 12288 magnetic cores to wind. And if we forego all integrated circuits, including 7400-series logic gate chips, that puts even more work in front of us. If we only do 512 chars, that's 3072 cores. So yeah, computer memory was very expensive.

QUOTE(Katajanmarja @ Jan 21 2025, 22:01) Now, a VFD intended to show digits only must have seven segments per character, plus an eighth segment if we need decimal points. I’m pretty sure that will not suffice here. Frankly, I don’t know how much we could optimize, but I think fourteen segments is quite enough for our fifty-four-or-so code points. You can either present your optimization plan (I’d love to see it) or round up a bit. For example, 14 × 56 = 784 bits of memory, am I correct?
That'd sort of work. What I think you're suggesting is reminiscent of the "[ en.wikipedia.org] binary coded decimal" approach, which is less dense but does work. The alternative would require decoding/"unpacking" logic to turn conventional binary digits into BCD so that they could be displayed, but that logic wouldn't be as awful as CRT driving circuitry. And it could probably be serialized, so that you only need one set of logic to do it for all the characters you display, and you just crunch them sequentially. If you sweep through them fast enough, the phosphor might not have faded by the time you come back to draw it again.

This post has been edited by Moonlight Rambler: Jan 21 2025, 20:09
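To make the "one set of logic, crunched sequentially" idea concrete, here is a toy software model of a multiplexed segment display. The segment patterns and the sixteen positions are invented for illustration; a real mid-60s build would be discrete transistor logic rather than anything programmable:

/* Toy model of a multiplexed segment display: one shared decoder,
 * character positions strobed one at a time.  The patterns below are
 * placeholders, not a real 14-segment font. */
#include <stdio.h>
#include <stdint.h>

#define POSITIONS 16            /* character positions on the display */

/* Decode table: one 14-bit segment pattern per code in a small (<64) set.
 * Only two entries are filled in; 0 means all segments off. */
static const uint16_t seg_decode[64] = {
    [1] = 0x00F7,               /* pretend these segments form an 'A' */
    [8] = 0x0176,               /* pretend these form an 'H' */
};

static const uint8_t display_buf[POSITIONS] = { 8, 1 };   /* "HA", rest blank */

int main(void) {
    /* What the hardware does continuously: select one position, run its
     * code through the shared decoder, drive those segments briefly,
     * move on.  Repeat the sweep fast enough and persistence makes all
     * positions appear lit at once. */
    for (int pos = 0; pos < POSITIONS; pos++) {
        uint16_t segments = seg_decode[display_buf[pos] & 63];
        /* in hardware: energize 'segments' on grid 'pos' for a short moment */
        printf("position %2d -> segment mask 0x%04X\n", pos, segments);
    }
    return 0;
}

Sharing one decoder across every position is the whole trick: the storage cost scales with the character buffer, not with segments times positions.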
--------------------
Jan 21 2025, 23:10
|
Katajanmarja
Group: Gold Star Club
Posts: 651
Joined: 9-November 13
|
QUOTE(Moonlight Rambler @ Jan 22 2025, 05:20) "The first multi-segment VFD was a 1967 Japanese single-digit, seven-segment device made by Ise Electronics Corporation." And that was for a presumably seven-segment (or fewer) digit, which can't draw the letter "W" for instance.
I’m pretty sure a 14-segment one would have been possible by then, had there been real applications for it. But you have been pretty good at convincing me that there were far worse bottlenecks. As a matter of fact, after having read your explanations and probably understood some percentage of them, I am in awe that they were able to produce the Tektronix 4010 terminal as early as 1972. That thing must have felt ultra cool to the people who actually got to use it. QUOTE(Moonlight Rambler @ Jan 22 2025, 05:20) some control codes (something I didn't mention, but which are also important for a terminal to have)
ASCII has so many of them that I wanted to mention at least a couple of those I thought the most unavoidable. I don’t really understand most control codes, and I’ve been under the impression that they originated in teleprinting rather than computing. QUOTE(Moonlight Rambler @ Jan 22 2025, 05:20) So yeah, computer memory was very expensive.
I guess this has not been stressed enough in popular presentations of computer history, which tend to revolve around the development of computational power and related components. I have indeed read bits about early memory solutions, but they discussed what could be done and what couldn’t rather than how expensive it was. Perhaps that’s not so surprising for descriptions of an era when no one could have a computer for personal home use.

I guess trying to explain 1960s computer memory to me is not so vastly different from me trying to explain a 3½-inch diskette to a kid whose family was using smartphones before he could talk: "It felt unreal. Suddenly, you could carry around amazing amounts of plaintext stored inside a little piece of plastic, instead of a bag full of file folders." "Okay, but why would anyone need to carry around a bag full of paper? And did I get you right, those things were barely usable for storing any music, let alone videos?!"

QUOTE(Gingiseph @ Jan 18 2025, 18:48) https://www.youtube.com/watch?v=L743MjJthHY I hope the video above can help with this perspective
The video did not really answer anything directly related to my question, but I’m very thankful to you for sharing. It was quite interesting and educational to watch. What surprised me the most was how fast the machine was at reading punched tapes, and without breaking them at that. Much less surprising, but still somewhat surprising, were the following facts:
1. How long and time-consuming the manual part, using the switches, was. Not per se, not at all, but considering that they probably could have built a machine where some parts of that input, too, could have been given on punched tapes or cards. Or perhaps one could have, after a smaller amount of manual switching, used sets of "startup modules" – punched tiles made of hard plastic or something, so they would have endured daily use better.
2. That running the game itself actually took seventeen minutes instead of, say, five or so. (This was the smallest surprise by far.)
3. That somebody had taken the time to write that game, and that somebody could apparently afford to spend computer time playing it back in the day.
This post has been edited by Katajanmarja: Jan 21 2025, 23:21
--------------------
______________________________________________________________________ “...But now, in this world consumed with conflict and rage, being nice is about the most revolutionary and rebellious thing you can be.” — Professor Elemental [ danbooru.donmai.us] Have some, please have some more.
Jan 23 2025, 22:32
|
Moonlight Rambler
Group: Gold Star Club
Posts: 6,375
Joined: 21-August 12
|
There's an old file I found floating around that purports to contain a somewhat humorous list of IBM jargon. One of its entries is "Demonstration Application Program", defined as "a game."
--------------------