The Project Gutenberg eBook of On-Line Data-Acquisition Systems in Nuclear Physics, 1969


This ebook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this ebook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook.


Title: On-Line Data-Acquisition Systems in Nuclear Physics, 1969

Author: National Research Council, Ad Hoc Panel on On-line Computers in Nuclear Research


Release date: April 29, 2013 [eBook #42613]

Most recently updated: October 23, 2024

Language: English

Credits: Produced by Mark C. Orton, Paul Marshall and the Online Distributed Proofreading Team at http://www.pgdp.net

*** START OF THE PROJECT GUTENBERG EBOOK ON-LINE DATA-ACQUISITION SYSTEMS IN NUCLEAR PHYSICS, 1969 ***

Ad Hoc Panel on On-Line Computers in Nuclear Research
Committee on Nuclear Science
National Research Council

NATIONAL ACADEMY OF SCIENCES
Washington, D.C.
1970


This is a report of work under Contract NSF-C310, T.O. 47 between the National Science Foundation and the National Academy of Sciences and under Contract AT(49-1)3236 between the U.S. Atomic Energy Commission and the National Academy of Sciences.

Available from
Committee on Nuclear Science
2101 Constitution Avenue
Washington, D.C. 20418

The first digital electronic device employed to collect nuclear data was the binary electronic counter (scaler) of the 1930's. In the next decade single and multichannel pulse-height analyzers appeared, still using vacuum tubes. In the 1950's the development of multichannel analyzers continued vigorously, with vast improvement of the analog-to-digital converter sections and with the introduction of computer-type memories, based first on acoustic delay lines and a short time later on ferrite cores. The replacement of vacuum tubes by transistors beginning in the latter half of the 1950's accelerated the pace of development and application of all types of electronic laboratory instruments.

The 1960's was the decade of the computer. Before the 1960's almost no on-line computers were used in nuclear research, but since about 1962 the computer has moved into the nuclear laboratory. It provides the research worker with an immensely flexible, powerful, and accurate tool capable of raising the research output of a laboratory while eliminating the most tedious part of the experimental work.

The phenomenal speed of development of computer hardware, software, and methodology contributes to the difficulty experienced by everyone involved in decisions about data-acquisition systems. Since the cost of a computer system is often a sizable fraction of the total cost of a new laboratory, there is an urgent need for a set of guiding rules or principles for use by a laboratory director planning a system, a reviewer going over a proposal for support, or a potential funding agency weighing proposals and reviews. The purpose of this report is to help fill that need. The material presented is current through 1969. Although we deal with a field that is developing rapidly, we hope that a substantial portion of the material covered will have long-lasting value.

The report was prepared by the Ad Hoc Panel on On-Line Computers in Nuclear Research of the Committee on Nuclear Science, National Research Council. Appointed in March 1968, the Panel first met in Washington, D.C., on April 22, 1968.

The original members of the Panel were H. W. Fulbright, H. L. Gelernter, L. J. Lidofsky, D. Ophir (through late 1968), L. B. Robinson, and M. W. Sachs. In June 1968, this group prepared an interim report. L. J. Lidofsky was on sabbatical leave in Europe and therefore could not participate during the academic year 1968-1969. Early in 1969 J. F. Mollenauer and J. Hahn joined the Panel.

The Panel has reviewed the present state of the field and has attempted to anticipate future needs. We have reached agreement on many important matters, including useful design features for computers employed in data acquisition, types of organization of data-acquisition systems suitable for various purposes, types of software that manufacturers should supply, and approximate costs of systems, and we present a number of recommendations in these areas. The Panel makes no recommendation, however, on standards for computer hardware, such as logic levels and polarities, because we are convinced that these are now rapidly being established through sound engineering progress and the pressure of economic competition in the fast-moving computer business.

Throughout this report we have expressed opinions based on our own experience and on the best information at our disposal. The nature of the report seemed to demand some discussion of properties of specific computers by name. We have tried to be neither misleading nor unjust in our evaluations.

We wish to thank everyone who has aided us, especially P. W. McDaniel, C. V. Smith, and G. Rogosa of the U.S. Atomic Energy Commission and the many scientists in AEC- and NSF-sponsored laboratories who supplied the basic data on which the economic survey chapter is based. We are indebted to several members of the staff of the Department of Physics and Astronomy of the University of Rochester for assistance in the preparation of the manuscript, especially Mrs. Brignall and Mrs. Hughes. We also received initial directions and many helpful suggestions from D. A. Bromley, Chairman of the Committee on Nuclear Science, F. S. Goulding, Chairman of the Subcommittee on Instrumentation and Methods, W. S. Rodney and P. Donovan of the National Science Foundation, and Charles K. Reed, Executive Secretary of the Committee on Nuclear Science.

On-line data-acquisition computer systems are made in a wide range of types and sizes. In all cases at least one electronic computer, a stored-program machine, is involved; wired-program devices such as pulse-height analyzers are not considered to be computers. The rest of the system typically consists of input/output (I/O) devices such as analog-to-digital converters (ADC's), printers, cathode-ray oscilloscopes, plotters, and control devices, which may include, in addition to the console typewriter, switch boxes to simplify the control of special types of operations and perhaps a set of logic circuits associated with the input system, used to provide preliminary selection of incoming data. In a small but increasing number of cases a computer is dedicated entirely to a "process-control" application, such as the automatic adjustment of the shim coils of a variable-energy cyclotron or the control of data acquisition in a nuclear-scattering experiment, with adjustments such as changing the angle of observation made essentially under direct automatic control of the computer. The smallest on-line systems use the smallest commercially available computers; the largest use computers bigger than those which until recently served most computing centers. Large systems sometimes include one or more satellite computers. The cost of individual systems ranges from approximately $25,000 to $1,000,000. The total cost of computer systems in low-energy nuclear laboratories is estimated to have reached about $20,000,000 by now. (There has been a larger expenditure in the high-energy nuclear field, where computer systems have been employed extensively for some years longer and where experiments are so expensive that the economic advantages of computer use were quickly recognized.)

We first list the main uses to which on-line computer systems have been put. We start with the simple operations, which we call Class 1.

Class 1 operations:

a. Accepting digital data from external devices and storing it in computer memory.

b. Preliminary processing of incoming data, on-line, before storage. This usually involves only operations of logic and simple arithmetic.

c. Controlling the presentation of data via cathode-ray oscilloscope or typewriter, often for the purpose of monitoring the progress of an experiment.

d. Controlling the recording of digital data on magnetic tape, paper tape, or other storage medium.

e. Controlling an incremental plotter.

f. Controlling the output of large quantities of data via a line printer.

g. Transmission of quantities of data between two computers or between a computer and a pulse-height analyzer or other device having a magnetic core memory.
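The flavor of a Class 1b operation can be conveyed by a short sketch (a modern illustration of ours, not drawn from the report; the names and window limits are hypothetical): a coincidence test and a pulse-height window are applied on-line, before the event is stored in memory.

```python
# Sketch of Class 1b preliminary processing: accept an incoming event only
# if a coincidence flag is set and the pulse height falls inside a window,
# then increment the corresponding histogram channel in "memory".
# All names and limits are illustrative, not taken from the report.

def process_event(histogram, channel, coincidence, lo=100, hi=3900):
    """Logic and simple arithmetic applied on-line, before storage."""
    if coincidence and lo <= channel <= hi:   # simple logic test
        histogram[channel] += 1               # store in computer memory
        return True
    return False                              # event rejected, not stored

histogram = [0] * 4096                        # a 4096-channel spectrum
process_event(histogram, 1500, True)          # accepted
process_event(histogram, 50, True)            # rejected: below window
process_event(histogram, 1500, False)         # rejected: no coincidence
```

Only operations of logic and simple integer arithmetic appear, which is why even the smallest machines handle Class 1 work comfortably.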

Several operations of intermediate complexity we will label Class 2.

Class 2 operations:

a. Processing of data already accumulated and stored either in memory or on tape or other medium (off-line processing). This data reduction is often more complicated and lengthy than the preliminary on-line processing referred to in Class 1b.

b. Calculation of information required by the experimenter during the experiment, for example, kinematics tables and particle energies corresponding to field strengths in analyzer magnets.

c. Process-control operations, in which the computer directs or regulates a sequence of events in an experiment. Under program control the computer monitors the course of the experiment and supplies signals that cause automatic changes in experimental conditions, such as starting and stopping times of event counting, angles of observation of scattered particles, and accelerator energies. Such applications are designed to relieve the experimenter of unnecessary labor and to reduce the probability of error in routine operations.
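As an illustration of a Class 2b calculation, the following sketch (our own modern example; the constants and function names are not from the report) tabulates the nonrelativistic kinetic energy of a proton bent through an analyzing magnet, E = (qBρ)²/2m, for several field strengths, as an experimenter might request during a run.

```python
# Illustrative Class 2b calculation: particle energy corresponding to a
# field strength in an analyzer magnet, nonrelativistic approximation.
# Constants and names are ours, chosen for illustration.

Q_PROTON = 1.602e-19       # proton charge, coulombs
M_PROTON = 1.673e-27       # proton mass, kg
JOULES_PER_MEV = 1.602e-13

def energy_mev(b_tesla, rho_m, q=Q_PROTON, m=M_PROTON):
    """Kinetic energy (MeV) for momentum p = q * B * rho."""
    p = q * b_tesla * rho_m
    return p * p / (2.0 * m) / JOULES_PER_MEV

# A short kinematics table for a 1-m bending radius:
table = {b: round(energy_mev(b, 1.0), 2) for b in (0.2, 0.4, 0.6)}
```

A medium-sized machine with programmed floating point can produce such a table in a fraction of a second, which is all the experimenter needs.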

Our final class involves even more complex calculations.

Class 3 operations:

a. Complicated treatment of reduced data, including least squares and curve fitting.

b. Large-scale calculations such as those required for the evaluation of theoretical nuclear scattering and reaction cross sections, e.g., DWBA calculations, which may each require running times of the order of minutes, even at a modern computing center.

Class 3 operations do not always have to be done during the course of the experiment; in most cases they can be carried out later, at leisure, at the local computing center. Nonetheless, calculations of the first type, and to a lesser extent the second, are currently being done at laboratories having large, powerful computers in their on-line data-acquisition systems.

Because computers have proved useful in so many fields, many varieties are now on the market, quite a few of them having properties highly suitable for nuclear-data acquisition. The properties particularly useful are, first, the ease with which a great variety of external input and output devices can be attached (interfaced to the computer); second, provisions for rapid, efficient response to interrupt signals from external devices; and third, usually a means of transferring data from external devices directly into blocks of memory without use of the central processor, the transfer possibly requiring only a single memory cycle per word. (This is referred to as direct memory access through a direct data channel.)

Several types of small computers have appeared on the market during the past year, some having 8-bit words, but they are too small for general data-acquisition use, although valuable for special applications. For present purposes, the smallest useful machines have a minimum memory size of 4096 (4k) 12-bit words, which can usually be enlarged to 32k words by the addition of memory modules, while the larger machines have minimum memories of at least 8k, with provision for expansion to several hundred k. Regardless of their size, the machines of the present generation all have memory cycle times around 1 or 2 µsec.

Before proceeding with the discussion it is convenient to find a simple scheme for classifying computers. The scheme adopted here is to divide them into three loosely defined classes—small, medium, and large—essentially on the basis of the properties of the basic central processors:

Small
    Word length: 12 to 18 bits
    Useful memory size: 4k
    Number of bits in instruction: 3 or 4
    Floating-point hardware: rarely offered
    Approximate cost range: $8500 to $40,000

Medium
    Word length: 16 to 24 bits
    Useful memory size: 8 to 16k
    Number of bits in instruction: 4 to 6
    Floating-point hardware: option sometimes offered
    Approximate cost range: $30,000 to $120,000

Large
    Word length: 32 to 48 bits
    Useful memory size: at least 16k
    Number of bits in instruction: 7 or more
    Floating-point hardware: standard
    Approximate cost range: $150,000 or more

Computers do not fall neatly into these three classifications, especially since manufacturers offer many optional features; therefore, some argument about the assignment of a particular machine to one or the other class is possible. This is especially true with respect to the small and medium types. The properties of a large number of small and medium-sized computers are given in Appendix A. Information on larger machines can be found in the Adams Associates Computer Characteristics Quarterly.

Having classified both the computers and the jobs that they may be called on to do, we now ask this question: How suitable is each of the three types of computers for each of the three classes of jobs, given that in every case the acquisition system consists of a single computer coupled to all necessary input and output equipment?

We start with the large computer system. All classes of jobs can be handled by this powerful system. However, we should question the wisdom of assembling a system based on a large machine unless a substantial amount of numerical calculating is anticipated, because the essential advantage of the large computer—the advantage that costs so much—is its capacity for rapidly executing highly accurate floating-point arithmetical operations.

The small computer system can handle the jobs of data acceptance, data manipulation, and output characteristic of the simple Class 1 operations, but it is suitable for very few jobs involving floating-point arithmetic. In fact, we must usually be skeptical about the use of small machines for any of the Class 2 operations except those of the process-control type, which in many cases would involve little if any arithmetic. (Process-control applications have been rather few to date, but a rapid increase can be expected in this field, especially because of the convenience and low cost of small modern computers.) It is apparent that these machines have been designed as economical instruments specifically intended to handle Class 1 jobs. The smallest word length of a machine in this group, 12 bits, is sufficient for storing in one word the output of a 4096-channel ADC unit, but it is not quite so convenient for handling the output of a typical scaler, which would likely require the use of two words. The capability of even a small computer system to convert experimental information into digital form, to transfer it into memory, to manipulate it, and to present it for inspection in a digested, convenient form, all at a high rate and essentially without error, is of immense value to an experimenter who has to cope with the abundant outflow of data from a modern nuclear experiment.

The capabilities of medium-sized computers are less clear. These machines are superior to the small ones mainly in two respects: they have a more flexible command structure (i.e., they have a larger set of wired-in operations), and, usually, they have a longer word length. These features make them easier to program and give them a limited, but important, capability to execute floating-point operations sufficiently quickly and accurately for many purposes, even though these operations must in most cases be programmed, in the absence of floating-point hardware. We can reasonably conclude that the medium-sized machines will serve for any use listed in Classes 1 and 2. Certain simpler calculations of Class 3a are also expected to prove feasible, but few, if any, of those of Class 3b.

The value of any feature depends on its need in the application involved; therefore detailed, absolute statements regarding each characteristic usually cannot be made. However, the Panel has discussed various features at some length, and we present here some general comments on the pros and cons of these features. Among the items discussed are some, such as word length and cycle time, that represent basic, inherent properties of the computer; while a great many others, such as priority interrupts, are customarily offered as options.

The shorter the word length the cheaper the hardware, generally speaking, but the less the accuracy in calculations unless multiple precision is used. For example, although the 12-bit words of the PDP-8 match the accuracy of data from most ADC's, they are too small to match the output data from most counters; furthermore, indirect addressing is often required because a single word is too short to include both the operation code and the absolute address of a memory location. Apart from addressing considerations, a 12-bit word is too small for many uses, e.g., in general-purpose pulse-height analyzer applications, where 16 bits or, better, 18 bits should be considered a minimum. Fortran programs for numerical calculations are in general best run on machines having at least 32-bit words, although 24-bit words are usually acceptable here when double precision can be used.
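The double-precision handling of counter data mentioned above can be sketched as follows (a modern illustration of ours; the function names are hypothetical): a count too large for one 12-bit word is split across two words and reassembled on readout.

```python
# A 12-bit word holds only 0..4095, so a large scaler count must be
# split across two words ("double precision"), as the text notes for
# typical counters.  Names are illustrative, not from the report.

WORD_BITS = 12
WORD_MASK = (1 << WORD_BITS) - 1   # 4095, the largest 12-bit value

def split_count(count):
    """Pack a scaler count into (high word, low word), 12 bits each."""
    return (count >> WORD_BITS) & WORD_MASK, count & WORD_MASK

def join_count(high, low):
    """Reassemble the original count from its two 12-bit words."""
    return (high << WORD_BITS) | low

hi, lo = split_count(1_000_000)    # far beyond one 12-bit word
assert join_count(hi, lo) == 1_000_000
```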

In general the more words that a system can retain the better; but the greater the memory, the greater the expense. The cost must be weighed against the need. For simple handling of data, a 4k memory may be adequate, but in a large shared-time general-purpose machine a 16k or greater memory is essential. In the latter case, the resident shared-time monitor will probably occupy at least 6k of the memory, so with a 16k memory only 10k would be left accessible to users, and experience has shown that this much can be taken up completely by one user compiling a Fortran IV program. A 4k memory is adequate for many process-control applications, but it is too small for many other applications such as general-purpose pulse-height analyzer use, where an 8k memory is highly desirable. Adding a supplemental rotating memory device (disk or drum), at a cost per word about 1 percent that of core storage, is often preferable to adding core memory (see 6 below).

For most purposes the typical memory cycle time of 1 to 2 µsec is quite adequate. Some of the modern computers have cycle times under 1 µsec.

These allow sequential depositing of digital data from external devices directly into blocks of computer memory without intervention of the central processor (direct memory access, DMA). Such input may require only one computer cycle per word, that being the next cycle after the one during which the interrupt signal arrives. This is the fastest means of getting data into memory, but it requires more external hardware and more complex interfacing than input through an accumulator of the central processor. Most data-acquisition machines provide both possibilities. Direct data channels can be valuable for interfacing to magnetic disks, drums, and tapes.

These can be very useful. They may cost as little as $125 each, depending on the machine, and can be used to reduce greatly the overhead running-time losses of the computer. In complicated data-taking applications many interrupt lines are desirable; 8 to 16 priority levels are generally adequate. The usual Fortran compiler cannot compile programs that respond properly to interrupts, although a relocatable object code generated by the compiler can always be assembled with a machine-language subroutine designed to handle interrupts. Enlargement of Fortran compilers for data-acquisition use to include statements designed to handle interrupts is desirable. (See, for example, the discussion of the Yale-IBM system, Chapter 2, Section E.)
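The effect of priority levels can be conveyed schematically (a modern sketch of ours, not a description of any particular machine): pending interrupt requests are serviced highest level first, so a fast device is never kept waiting behind a slow one.

```python
# Schematic sketch of priority-interrupt dispatch: level 0 is the highest
# priority.  Device names and levels are illustrative only.

import heapq

pending = []                          # (priority, device) pairs

def request_interrupt(priority, device):
    """Record an interrupt request at the given priority level."""
    heapq.heappush(pending, (priority, device))

def service_next():
    """Dispatch the highest-priority pending request, or None if idle."""
    return heapq.heappop(pending)[1] if pending else None

request_interrupt(7, "teletype")      # low priority: slow device
request_interrupt(0, "adc")           # high priority: data word ready
request_interrupt(3, "tape")
order = [service_next() for _ in range(3)]
```

The ADC is serviced first regardless of arrival order, which is precisely the overhead saving the text describes.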

Magnetic media (drums, disks, and standard magnetic tapes) are employed here. DEC tapes are useful and reliable, but they have only a small capacity. The use of such microtapes is also limited by their incompatibility with typical computer-center equipment. Reliable, inexpensive incremental magnetic tape units are now available which can be operated asynchronously at about 300 Hz, too slow for many purposes. Some of them can also be run much faster in a synchronous mode. Drums and disks are highly desirable because they provide program-controlled rapid access to great volumes of data. Typically, access times are of the order of 17 msec. In the past few years, good and inexpensive disks have been developed which are now on the market. Some suppliers are IBM, CDC, Datadisk, Burroughs, DEC, and SDS. Disk storage is cheaper per word than core storage by two orders of magnitude; therefore, it is preferable for applications where data can be organized serially and where access and transfer time requirements can be relaxed somewhat. For example, a small DEC disk system for the PDP-8 holds up to 128k 12-bit words and has an average access time of 17 msec and a transfer rate of 16,000 12-bit words per sec. It costs $6000 for the first 32k of capacity, plus $3000 for each additional 32k, including interfacing through the direct data channel. Larger and faster versions are available. Disks (or drums) should be important in future systems. Magnetic tapes of the IBM-compatible type are valuable, especially for communication with machines at computing centers, but tape drives and interfacing are usually expensive. It often costs $25,000 or more to get a single tape drive in service, although the next few are usually less expensive. The cheapest tape drives available cost about $5000. The cost of interfacing depends greatly on the particular computer. It may be as little as $5000, but it is often in the neighborhood of $15,000 or $20,000.

Because they provide immediate access, the most satisfactory program storage media are magnetic disks and drums, followed by the IBM tape. The most satisfactory cheap device for input of programs is the high-speed, punched-tape reader, but the advantages of using small "cartridge-type" magnetic tapes have recently been emphasized. Recently, card readers have appeared which are much cheaper than the older IBM models. They can read 200-300 cards per minute. They cost about $2000 plus interfacing. Examples: Soroban, General Devices, Uptime.

A simple means of restoring the basic loader program (other than toggling!) is desirable. Many computers have this feature, e.g., the IBM 360 series; the SDS Sigma 2, Sigma 5, and 910; and the PDP-9.

Hardware memory protection is necessary in multiprogram systems. It is very helpful in any machine with a batch-processing resident monitor and in other special situations.

This feature is useful for such purposes as detecting memory failures, but in a small system it is usually not worth its cost in computer speed and in capital investment.

This is a big subject, partly because the organization of computers for input and output of data varies with the manufacturer. Some computers such as the Hewlett-Packard and the DEC models are especially easy to interface, whereas the automatic channels of the SDS Sigma computers and the ordinary IBM machines (e.g., the 360 series) are very difficult. The IBM machines require an expensive control unit. It is said that before a competent engineer could order plug boards for Sigma interfacing he would have to study the system for a month or two. However, once interfaced, these machines permit rapid input of data. Interfacing a $5000 Calcomp plotter to the automatic channel of an IBM or Sigma series machine may cost much more than the cost of the plotter.

Many small computers use teletype machines as console typewriters. The ASR-33 teletype has not performed well, but it has recently been improved. The ASR-35 and KSR-35 have excellent records, and the newer ASR-37 and KSR-37 (15 characters/sec) are very good. The IBM Selectric has had a mixed reliability record which is, however, improving. In every case, expert routine maintenance is required.

These are a valuable asset to efficient programming. At least one, and preferably more, is desirable, especially in the medium and large computers.

These are of great use for obtaining a permanent ("hard copy") record, especially when large volumes of output are produced; however, they are expensive, usually costing $20,000 or more (including interfacing). In order to avoid tying up a large central processor during typewriter output of masses of data, a line printer is not only very useful but essential for efficient operation (and to spare the typewriter). A line printer can be immensely helpful and can save much time in the process of developing and debugging programs. The cost, however, will often preclude its addition to a modest system. If the system has an IBM-compatible tape drive, the computer output can be written on tape and later carried to a computing center for printing. Several industrial concerns are known to be working on new types of printers, some being dry-copy, nonpercussive types. One type which has already been marketed, the Inktronic printer, operates by spraying ink at the paper from small tubes. The characters are well formed. It operates at about 120 characters per second and costs $5600. Conveniently, it requires standard Teletype interfacing, and it can be ordered with an optional keyboard. Although it has exhibited a few new-product ailments in its first 8 months or so of use, it shows promise of becoming a very useful device. Another printer operating on a similar principle has just appeared, the A.B. Dick Company's Videojet printer, priced at about $4900.

The overwhelming favorite is still the incremental machine called the Calcomp plotter. It costs about $5000 and is easily interfaced to many computers. It is very accurate (about 0.01 in. in each direction) and provides valuable output to the experimenter. It can be programmed to plot experimental points and theoretical curves together on white paper in India ink, relieving draftsmen of considerable work and doing a more precise job. Other incremental plotters are now on the market, e.g., the Houston Instruments version. Varian has developed an electrostatic plotter to sell for about $15,000.

At least four types are in use. The standard scheme involves the displaying of bright spots under control of the computer, which generates appropriate words to cause x and y deflections of the spot after those words have been transformed by DAC's in the CRT unit. The pattern is rewritten continuously. A light pen held against a particular part of the display pattern can be used to signal the computer. This scheme works well but may produce a flickering image if the computer is interrupted frequently to handle higher priority jobs or if the display is so complicated that the rewriting period exceeds 1/30 sec. The expensive hardware option called a character generator is considered not worthwhile unless large amounts of text are to be displayed. On a 10 in. x 10 in. raster a matrix of 1024 x 1024 dots is sensible.
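The word format in the standard scheme can be sketched as follows (the layout here is hypothetical, for illustration only): with a 1024 x 1024 raster, 10 bits suffice for each coordinate, so a single 20-bit field selects any spot for the deflection converters.

```python
# Hypothetical packing of one display spot into a computer word whose
# bit fields feed the x and y deflection converters.  The field layout
# is our own illustration, not a documented format.

RASTER_BITS = 10                       # 2**10 = 1024 positions per axis
COORD_MASK = (1 << RASTER_BITS) - 1

def spot_word(x, y):
    """Pack one display spot into a 20-bit word: x in the high field."""
    assert 0 <= x <= COORD_MASK and 0 <= y <= COORD_MASK
    return (x << RASTER_BITS) | y

def unpack_spot(word):
    """Recover (x, y) from a packed display word."""
    return (word >> RASTER_BITS) & COORD_MASK, word & COORD_MASK

display_list = [spot_word(ch, ch % 512) for ch in range(1024)]  # a pattern
```

The display processor cycles through such a list continuously, which is why a long list lengthens the rewriting period and invites flicker.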

A second scheme involves a disk or drum on which the computer writes the words to generate the pattern. Separate reading heads send the words to the CRT unit. Thus the display, automatically rewritten over and over, is updated from time to time by the computer. The light-spot cursor and joy-stick method replaces the light pen in this case. (In passing, it is worth remarking that a light pen is only as effective as the computer program allows it to be, that the effort of programming for light-pen control is usually not trivial, and that a substantial amount of core storage may be required. A means of display control perhaps not so popular as it should be is sense-switch control.)

A third scheme makes use of a modern storage CRT. The computer sends the pattern to the CRT only once, and the display can persist until erased. This method is flicker-free and inexpensive, but the pattern is not so distinct and sometimes not so bright as in the above schemes. Furthermore, the storage tube can be used alternately as an ordinary CRT with quite satisfactory resolution. A storage version is thus possible which reverts to the standard scheme, for high-resolution inspection, when a button is pushed. The storage-tube scheme is probably the best buy for use in a typical small system. The Tektronix Company has recently announced a storage-tube device, Type 4501, which is said to generate a continuous video signal suitable for driving large-screen television monitors.

A fourth scheme involves the generation of a video (analog) signal corresponding to the display, written on a disk or drum by the computer. Reading heads then send the video information to a CRT having a TV raster synchronized with the rotation of the medium. This is a good scheme where many displays are needed, but it is too expensive for many applications, costing upwards of $20,000 for the first unit. (For example, the Data Disc System 6500 Display costs about $23,000.)

One display feature considered desirable by many nuclear physicists is rotation of isometric data plots. This can be accomplished in one of two ways: recomputing every displayed dot or using an appropriate analog device (potentiometer). Because the latter is so cheap, clearly its use is more desirable than the recomputation of the rotated view. Also, using a light pen on a recomputed display is especially difficult because the inverse computation has to be performed in order to maintain proper correlation with the original data. However, it should be noted that the TV raster technique is limited in this respect: rotating potentiometers cannot be used, and the image must be recomputed. The technology of displays is developing rapidly.
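The recomputation approach can be sketched as follows (our own minimal modern illustration; the particular isometric projection chosen is arbitrary): each 3-D data point is rotated about the vertical axis and then projected to 2-D screen coordinates.

```python
# Sketch of "recompute every displayed dot": rotate the 3-D data points
# about the z (vertical) axis, then project isometrically to 2-D.  The
# projection formula is an arbitrary illustrative choice.

import math

def rotate_and_project(points, angle_deg):
    """Rotate (x, y, z) points about the z axis, then project to 2-D."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    out = []
    for x, y, z in points:
        xr, yr = x * c - y * s, x * s + y * c   # rotation about z
        # simple isometric-style projection of the rotated point
        out.append((xr - yr, z + 0.5 * (xr + yr)))
    return out

spectrum = [(ch, row, ch * row) for ch in range(4) for row in range(4)]
view = rotate_and_project(spectrum, 30.0)
```

Two multiplications and two additions per point for the rotation alone make clear why a cheap sine-cosine potentiometer feeding the deflection amplifiers is attractive by comparison.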

In many cases, especially where typical standard operations are involved, it is preferable to use external devices to handle preliminary selection and sorting of events, rather than to ask the computer to do the entire job. For example, particle identification by use of signals from two counters involves one or two multiplications and additions, which can be carried out almost instantly by a fairly simple external analog device, whereas a small computer would likely require at least 500 µsec for the job, assuming calculation, and perhaps 40 µsec, assuming table look-up.
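The arithmetic involved can be sketched as follows (a modern illustration of ours; the discriminator bands are hypothetical): for a thin dE/dx counter and a stopping counter, the product ΔE(E + ΔE) is roughly proportional to mZ², so one addition and one multiplication separate particle species.

```python
# Sketch of two-counter particle identification: delta_e from the thin
# transmission counter, e from the stopping counter.  The bands below
# are hypothetical numbers chosen only to make the example run.

def particle_id(delta_e, e):
    """One addition and one multiplication: the m*Z**2 discriminator."""
    return delta_e * (e + delta_e)

def classify(delta_e, e, proton_band=(0.5, 1.5), alpha_band=(14.0, 18.0)):
    """Assign an event to a species band, or reject it as 'other'."""
    pid = particle_id(delta_e, e)
    if proton_band[0] <= pid <= proton_band[1]:
        return "proton"
    if alpha_band[0] <= pid <= alpha_band[1]:
        return "alpha"
    return "other"
```

The same two arithmetic steps are what the external analog device performs almost instantly, and what cost a small computer hundreds of microseconds when programmed.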

Computers as small as a PDP-8 have been successfully time-shared by several users in special applications. The justification given is that all the peripheral hardware can be shared as well, so that the added constraints and programming difficulties are balanced by savings in hardware costs. Computers have also been shared for simultaneous on-line data-taking in low-data-rate experiments. In working out the economics of time-sharing, the added hardware (such as CRT's, remote consoles, and memory protection) needed to allow simultaneous access by more than one user, as well as the extra memory space needed by the time-sharing monitor, should be considered. The greatest costs, however, lie in the added constraints placed on each of the users and in the greatly increased cost of programming. In many cases the use of two or more identical computers is preferable. However, in large, expensive systems time-sharing can be very useful.

Complete documentation should be provided, including listings, step-by-step user instructions, and some fully worked out examples.

a. Hardware diagnostic routines: to test memory addressing, the instruction set, and the correct operation of every peripheral and special hardware feature.

b. Systems to edit, assemble, and debug programs in symbolic machine language: These should efficiently use any special I/O device such as magnetic tape, disk, or line printer.

c. Efficient subroutines should be provided for operation of any special peripheral device purchased from the computer manufacturer. Symbolic language source tapes or card decks, listings with comments, and examples of use should be included.

d. Conversational Fortran-type programs provided by some manufacturers are useful for supplemental calculations.

NOTE: The following points apply particularly to the medium and large machines and become increasingly important as the computer becomes larger and more complex.

e. A Fortran compiler and operating system, with a convenient method to insert machine-language instructions and subroutines. Good compile-time and run-time diagnostics are essential.

f. Mathematical subroutines should be provided in binary and source language.

g. Complete specifications and documentation for the programming system should be supplied, so that programs prepared by users can be made compatible. It may be objected that this will cost too much, but failure to do so will be very costly and frustrating to many users.

Experience at Brookhaven and Berkeley has shown that a programmer can produce between 10 and 20 debugged and documented lines of program per day when working on reasonably straightforward programming, depending on such factors as experience. When working on a complicated monitor system he would be considerably less productive. System programming is obviously very expensive; the average person exploring the computer market would therefore be well advised to consider the software support along with the hardware offered in each case. Manufacturers vary greatly in this respect. A major contributing factor to the persistent popularity of the PDP-8 is its extensive software support. In general, the newer a computer, the less software is likely to be available.

The movement toward computer systems began in earnest about 1962. Much of the early work depended on the use of magnetic tape for storage of data, either raw or partially digested, the analysis being carried out later, off-line. More recently, computers have been used increasingly for on-line processing. The early work is well known and will not be described here. Some of the more recent systems are very close descendants of one or another of the early systems, and many varieties are now in service, most incorporating small or medium-sized computers. However, extensive new experience has been gained during the past two or three years from the operation of a few large time-shared systems, in particular those at the tandem Van de Graaff accelerator laboratories at Yale and at Rochester, perhaps the first large systems in operation that were planned systematically for nuclear research. Both operate under multiprogramming monitor control, background calculations being possible, on a low-priority basis, simultaneously with data acquisition.

Simple rules for the design of various types of data-acquisition systems cannot be stated, but some examples of possible systems can be given. (See Figure 1.)

a. A simple system for pulse-height-analysis work can be assembled from a small computer, a 5-in. Tektronix CRO, an ADC unit, and a teletype with paper-tape attachment for about $30,000, not counting programming and engineering costs, provided that a competent engineer is available. A Calcomp plotter could be added for about $6000. To maintain and operate the system, at least a half-time technician-programmer would be required.

