The Global Solution for the Coming Energy Crisis
by Ralph Nansen
Copyright 1995 by Ralph Nansen, reproduced with permission
Chapter 11: Costs are Based on Sound Estimates
The cost of electricity generated by solar power satellites compared with coal goes to the heart of the reason to develop solar power satellites. We cannot ignore the immense economic benefits, not to mention the environmental necessity of ceasing to pollute the atmosphere and of halting the further accumulation of deadly wastes from nuclear power plants.
The real issue is the validity of the cost estimates that form the basis for these comparisons. Even though estimating is part fact and part guesswork, the estimates must be close to reality or the great economic advantage is lost and the system will not be widely deployed. Many factors go into making cost estimates, including related experience and the quantity of parts being manufactured. When a new venture is being developed it is often impossible to compare everything to previous experience, but by comparing what we can, we are able to predict the remainder from similar experiences of the past.
The solar power satellite is simple in concept, but is unique because of its huge size. Because of its simplicity it has only a limited number of different kinds of parts. Its size is the result of assembling a vast number of smaller, identical parts together. For example, all of the solar cells are the same, all of the radio-frequency generators are the same, and most of the structural pieces are the same. As a result of this characteristic most of the cost of the satellite hardware is dependent on determining the cost of items like solar cells that will be manufactured in mass-produced quantities, even for the first satellite. Each satellite would have over half a billion solar cells, more than five times current annual world production. To reach this production quantity will necessitate factories designed for mass production. As we know from experience, the more parts produced, the lower the cost.
In this chapter we will consider some of the critical elements and why their cost estimates are reasonable.
Previous Space Investment
In 1961, when President Kennedy announced the goal of sending men to the moon, we were but three years into the space age. The famous rocket designer Wernher von Braun had worked out an idea of how it could be done in 1953. Even Jules Verne, the great visionary and author, had written a science fiction story in the nineteenth century about man’s journey to the moon. But the actual amount of serious study of the subject was very limited. A small group from NASA had been working quietly for a couple of years, but that was about all. None of the launch vehicles, spacecraft, or lunar landers had been developed, and the space industry base was only in embryonic form. The massive test and launch facilities did not exist. The only large booster under development was the much-too-small Saturn I, and space engineering experience was mainly with military missile systems. In fact, the starting base for such a massive technological undertaking was shaky indeed.
The list of different components, subsystems, and vehicles that had to be designed, developed, and tested to form the complete Saturn/Apollo program staggered even the most optimistic mind. Congress wanted to know what it was going to cost before they would authorize the initial funds. NASA pieced together the best estimate they could based on limited experience and multiplied that by an uncertainty factor to come up with a cost. Their projected program expenditures were estimated to be $24 billion in 1961.
In the ensuing years, as the program evolved, a major new industry was developed. A sprawling, empty, tank assembly plant at Michoud in New Orleans, Louisiana, was gutted, rebuilt, modernized, and expanded to assemble the giant Saturn boosters. The swamps and bayou country near Pearl River, Mississippi, soon throbbed to the mighty roar of stage test firings conducted in the new test stands squatting near the water’s edge. In Huntsville, Alabama, giant test facilities grew out of the ground, changing a quiet southern town into the “place where space begins.” In Clear Lake City outside of Houston, Texas, NASA brought together the finest team of space engineers and scientists ever assembled. The largest building on earth rose out of the Florida sand to house the coming monsters. Launch complex 39, with its giant crawling transporters, was made ready. In the aerospace companies, engineers learned a new language—no more was it “dihedral” and “wing sweep angle”; it was now “delta-V, thrust-to-weight, mass fraction, staging velocity, insertion velocity, perigee, apogee, main engine cut-off, ISP, chamber pressure”—a whole new vocabulary. We had to learn to think in thousands of feet per second, not hundreds of miles per hour. We learned from a few old-timers in the space business—if you can call people with two or three years’ experience “old-timers”!
New trails were blazed. Out of the colleges came bright-eyed, eager young engineers drawn by the challenge, refusing to believe that it could not be done. This new breed was joined by others who came from designing steam boilers to designing propellant tanks. Factory people came from everywhere. The task they undertook was clearly the largest and most technically complex ever attempted. Starting from an ill-defined base and inventing the needed technology, these young tigers were dedicated pioneers at the leading edge of man’s knowledge in the summer of 1962. Yet, the task was accomplished at a total cost very close to the original estimates. If we were to consider the $24 billion spent on the moon program in today’s dollars, the cost would be approximately $120 billion.
Technology Starting Point
The technical challenge of the solar power satellite is comparable to the Saturn/Apollo moon program in complexity and development magnitude, except for two distinct differences. We understand the overall satellite system much better than we understood the Saturn/Apollo system at the beginning of the moon program in 1961—we are not starting from scratch. More important, we now have nearly four decades of space experience behind us. All the technology developed by the lunar program, the shuttle program, the National Aerospace Plane research, numerous satellite programs, terrestrial development of solar cells, development of the Space Station, and all of the big radar systems give us a solid base on which to build.
Space Transportation—A New System is Required
When the United States’ first successful satellite hurtled into space on top of a Jupiter C rocket in January 1958, the cost for delivering that payload to orbit was in the hundreds of thousands of dollars per pound. With increased knowledge and abilities, we moved on to bigger and better rockets. Even though we were still throwing away an expensive rocket, the cost of launching Skylab on top of a Saturn V was on the order of one thousand dollars per pound.
Space Shuttle with its promise of reusability was a big new step. We only throw away the external fuel tank and some of the hardware of the solid rocket motors on each flight. The shuttle orbiter, with the expensive engines, electronics, and controls, returns intact to the earth and lands at the launch site like an airplane. It is flown again and again. This is the first generation of partially reusable launch vehicles. Unfortunately the promise was greater than the reality and it has not achieved its launch cost goals, primarily due to politically driven decisions and poor management during development.
The National Aerospace Plane was supposed to fly single-stage to orbit (SSTO) and back with a fully reusable vehicle. This research program used a ramjet/scramjet airbreathing engine and made some major steps forward in the development of high-temperature materials, reusable cryogenic insulation, and hypersonic aerodynamics before funding limitations and technical problems with the engine led to its termination.
There have been some experiments with a single-stage-to-orbit reusable rocket-powered vehicle that looks promising. The first test for maneuvering and landing has already been completed, but government funding is drying up. That is too bad because in a larger size it could be a serious candidate for launching the satellite hardware.
A different concept, developed during the studies in the late 1970s, used a two-stage, winged, fully reusable, vertical takeoff, rocket-powered vehicle, with both the booster and orbiter returning to the launch base to land horizontally like the Space Shuttle. This seemed to be a good concept at the time and is still the most likely candidate for launching heavy payloads.
Even more intriguing, particularly for lighter payloads and as a personnel carrier, would be a large airplane capable of Mach 3 speeds to carry aloft a rocket-powered orbiter to be launched at Mach 3. Both the jet engine–powered airplane and the winged orbiter would return and land at the launch site, with no hardware discarded.
Since the opening of Russia to the outside world, its technology is also available. Engineers who have visited Russia and seen its rocket engines are very surprised at the level of technology and efficiency the Russians have reached. In many areas it is considerably ahead of what has been done in the West. The use of Russian-designed and -built engines could be a major advantage.
All this activity sets the stage for the final step: a fully reusable space freighter and personnel carrier. By not throwing away expensive hardware it will be able to provide the low cost necessary to build economical commercial solar power satellites. Whatever the final design, it must incorporate the three basic principles essential to all good transportation systems.
First and foremost is minimal maintenance. This certainly means the vehicles can no longer be thrown away and replaced after each flight.
Second, the vehicles must be able to carry large payloads and offer convenient loading and unloading, using pallets or cargo containers much as containerized ships do. Pallets and containers will expedite loading and unloading of launch vehicles just as they did for ships and railroads.
Third, whatever the design, the vehicles have to be able to fly over and over again. A typical airline keeps its airplanes in the air more than 10 hours a day, year after year. Airlines don’t make money when their planes are on the ground. A space freighter must have similar capability.
A system designed and operated to these requirements will become a mature transportation system like airlines, railroads, ships, or the trucking industry. The cost of operating mature transportation systems is a function of the cost of their fuel. Operating cost is typically between two and five times the cost of fuel. Today the cost of space transportation is well over a thousand times the cost of fuel. When there are frequent flights and nothing is thrown away, space transportation vehicles will join the ranks of other mature transportation systems. Their operations cost will certainly fall below ten times fuel cost and eventually reach no more than five times the cost of fuel.
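The fuel-cost rule of thumb above can be illustrated with a little arithmetic. In the sketch below, only the multiples (two to five times fuel cost for mature systems, over a thousand times for today's launchers) come from the text; the fuel bill per flight is an assumed figure chosen purely for illustration.

```python
# Illustrative sketch of operating cost as a multiple of fuel cost.
# The fuel bill per flight is an assumed figure, NOT from the studies;
# the multiples are the ones quoted in the text.

def operating_cost(fuel_cost: float, multiple: float) -> float:
    """Model operating cost as a simple multiple of fuel cost."""
    return fuel_cost * multiple

fuel_per_flight = 500_000  # assumed propellant cost per flight, dollars

mature_low = operating_cost(fuel_per_flight, 2)    # airline-like, best case
mature_high = operating_cost(fuel_per_flight, 5)   # airline-like, worst case
today = operating_cost(fuel_per_flight, 1_000)     # current expendables

print(f"mature system:   ${mature_low:,.0f} to ${mature_high:,.0f} per flight")
print(f"today's systems: over ${today:,.0f} per flight")
```

Whatever the actual fuel bill turns out to be, the two-hundred-fold gap between the multiples is the point: reusability and flight rate, not propellant, dominate today's launch costs.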
The number of flights needed to launch solar power satellites will require the development of a new reusable space transportation system, but this is not the only program that needs low-cost space transportation. The growing use of cellular wireless telephones is leading inexorably toward space-based worldwide systems. Motorola is already working on its Iridium system, and Teledesic is proposing a more aggressive program that would use 840 satellites. Space travel for the average citizen is the goal of one new entrepreneurial company. Because of these and other requirements, solar power satellites will not have to pay alone for the development of a new space transportation system; the cost of this significant piece of infrastructure can be spread among several users.
Solar Cells—Becoming a Mature Industry
Solar cells are another example of space-related development. They are the core of the solar power satellite concept, so their cost is pivotal to the success of the system. In 1980 the uncertainty of their projected cost was one of the reasons for terminating development of solar power satellites. Most United States satellites launched over the last 35 years have received their electrical power from the sun via photovoltaic devices called solar cells, which convert sunlight directly into electricity. These cells usually were made from silicon and were very expensive because of the laboratory processing required for the small numbers used. Through the years their efficiency has steadily increased, and in recent years the impetus to develop low-cost solar cells for terrestrial use has greatly increased research in the field.
In the spring of 1994 my colleagues and I conducted a survey of the United States solar cell industry. We visited nine manufacturing facilities, three national laboratories, and four utilities to assess the progress made over the last fifteen years. What we found was very encouraging.
Current world production has grown to about 100 megawatts annually; these are principally terrestrial solar cells of varying types. This level could not support a solar power satellite program at this time, but none of the people we talked to were intimidated by the prospect of the industry as a whole expanding production to the volumes needed for a solar power satellite program. The only comments in this regard were that “given sufficient time and sufficient demand” the accumulation of the necessary tooling and the development and use of better mechanized production facilities would certainly take place.
Photovoltaic manufacturers are building solar cells and arrays using many different technologies. In addition to the crystalline silicon and gallium arsenide cells that are close relatives of transistors and computer chips, technologies now include thin-film cells of several different materials. These cells are made by depositing a very thin layer of photovoltaic material on a substrate of plastic film, metal foil, or thin glass. While not as efficient as crystalline cells, they can be lightweight, use very small amounts of expensive materials, and have the potential to be very inexpensive. The two most promising thin-film cells are copper indium diselenide and cadmium telluride.
However, crystalline silicon still has the largest share of commercial production. It has held this position through improvements in manufacturing techniques and steadily increasing efficiencies. Today a typical commercial silicon solar cell is a thin wafer four inches square that is either a single crystal or polycrystalline. The wafers are made by cutting thin slices from a silicon ingot. The single-crystal ingots are made by an interesting process called a Czochralski pull, in which one very large crystal is formed by slowly withdrawing a rotating seed crystal from a bath of molten silicon. As it emerges, the silicon solidifies in the form of a single crystal. Polycrystalline ingots are made by casting the molten silicon in a mold, similar to the way iron is cast.
Dramatic advances in low-cost production techniques have been made in processing these large ingots, reducing the cost of commercial solar cells to a range of $2.50 to $5.00 a watt. With the terrestrial market continuing to expand, manufacturers are projecting $1.00 per watt in the near future. In 1980 we projected cell costs would reach 25 cents a watt in 1980 dollars when the production rate was great enough to build the first satellite. This would be about 50 cents a watt in today’s dollars. There is little doubt that with a ten- to hundred-fold market expansion from building solar power satellites the original cost target of 50 cents a watt will be achieved.
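The dollar figures in the preceding paragraph can be tied together with a short calculation. The factor of two used to convert 1980 dollars into today's dollars is the approximate inflation ratio the text itself uses; everything else below simply restates the quoted numbers.

```python
# Cost-per-watt figures from the text, gathered in one place.
INFLATION_1980_TO_NOW = 2.0    # approximate ratio used in the text

target_1980 = 0.25             # $/W projected in 1980 dollars at satellite volumes
target_now = target_1980 * INFLATION_1980_TO_NOW

commercial_now = (2.50, 5.00)  # today's commercial range, $/W
near_term = 1.00               # manufacturers' near-term projection, $/W

# Even the near-term $1.00/W projection is still twice the satellite
# target, which is why the ten- to hundred-fold market expansion from
# satellite construction matters.
gap = near_term / target_now
print(f"1980 target in today's dollars: ${target_now:.2f}/W")
print(f"remaining gap from ${near_term:.2f}/W: {gap:.0f}x")
```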
The situation is even better for cell efficiency. In 1980 we were projecting silicon solar cell efficiency would be 16.5%. Today silicon cells can be purchased that are over 20% efficient, and with concentrators to intensify the sunlight their efficiency reaches 26%. Current production efficiencies for most silicon terrestrial solar cells are in the range of 12% to 14%, but are expected to increase as new advances are incorporated into the production lines. Gallium arsenide cells are as high as 32% efficient, but their cost still limits them to high-value space applications. Even the thin-film cells have reached efficiencies over 16%, with more improvements in sight.
Our survey revealed that there are several good solar cell systems to consider for an updated solar power satellite. They vary significantly in their characteristics and range from the sophisticated high-efficiency gallium arsenide cells with concentrators to lightweight, lower efficiency thin-film systems on flexible substrates. Each has its advantages. Concentrated systems dramatically reduce the area of cells required and also the total area of the array. Conventional silicon cells combine potential low-cost production with good efficiency. Thin-film systems promise very light weight and good space radiation resistance. There is always the possibility of new photovoltaic materials being discovered as this fascinating, growing industry attracts brilliant young minds.
As we found in our recent survey there are now many good options, so when the time comes to make a firm choice, the designers will be able to pick and choose the best from several candidates.
Wireless Energy Generators—Already in Mass Production
The wireless energy beam capable of sending a billion watts of power to the earth requires many parts to make it work. One of the most critical parts is the radio frequency generator that converts the electricity produced by the solar cells into high-frequency energy. This is the heart of the transmission system. During the Boeing studies in the 1970s klystrons were selected as the preferred system because of their high efficiency. Klystrons are used extensively in high-power radars. There are other types of tubes that could be used, as well as solid-state devices that are similar to solar cells. The frequency selected for the energy beam is 2,450 megahertz, the same frequency used by microwave ovens, which rely on magnetrons. The efficiency of magnetrons is not as high as that of klystrons, but they are being made by the millions. Since the studies of the 1970s, Bill Brown, who first proved wireless energy transmission, has discovered a simple way to modify magnetrons to make them work as high-frequency amplifiers with increased efficiency and greatly reduced unwanted harmonics. As a result it is now practical to use low-cost, mass-produced magnetrons for the radio frequency generators. This change will ensure a predictably low cost for the energy transmitter. Today’s production cost for a 1,000-watt magnetron is about $12. A one-gigawatt satellite will use about 2 million of them.
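Those last two figures invite a quick back-of-the-envelope check. Note that the closing interpretation, that the extra installed capacity covers conversion and transmission losses, is my own reading of the numbers, not a figure stated in the studies.

```python
# Back-of-the-envelope check on the magnetron figures in the text.
TUBE_POWER_W = 1_000             # one kilowatt per magnetron
TUBE_COST_DOLLARS = 12           # today's production cost per tube
TUBES_PER_SATELLITE = 2_000_000  # about 2 million per one-gigawatt satellite

installed_rf_gw = TUBE_POWER_W * TUBES_PER_SATELLITE / 1e9
total_tube_cost = TUBE_COST_DOLLARS * TUBES_PER_SATELLITE

# About 2 GW of installed RF capacity for one gigawatt delivered to
# earth; the excess presumably covers conversion and transmission
# losses along the way (my interpretation, not a study figure).
print(f"installed RF capacity: {installed_rf_gw:.0f} GW")
print(f"total cost of tubes:   ${total_tube_cost:,}")
```

At roughly $24 million, the tubes themselves are a remarkably small fraction of the cost of a power plant meant to deliver a billion watts.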
A further step toward lower cost could be made if the efficiency and producibility of solid-state devices can be improved. The Japanese used a solid-state transmitter in their successful in-space test of wireless energy transmission in February of 1993.
Developmental Cost Estimates—Political or Real?
Now it is time to discuss cost estimates. This can be dangerous because cost projections are notoriously inaccurate and depend on a set of assumptions that are seldom followed. In addition, inflation plays terrible tricks, particularly when you are trying to project ten to fifteen years or more into the future. However, it is important to understand the magnitude of the costs, what they include, and how they were developed so you can make your own judgment as to their validity. The estimates I will use are those developed during the DOE/NASA studies because they are the best estimates available. I will discuss how they will be affected by evolving technology and changes in the value of a dollar.
In the aerospace industry, estimators handle the effect of inflation by making their estimates in man-hours and material costs based on a specific year chosen for the analysis. They then use an inflation factor to project the costs to some year in the future. Most of the cost estimates made by the aerospace industry for their own programs during the early concept phases use cost estimating relationships (CERs). These take into account past programs of similar complexity and account for all the costs of errors and overruns of past programs that will probably happen again.
At this point you might ask, “If that is the case, why is there so much in the press about overruns on government contracts in the aerospace business?” In order to answer that question, it is important to understand how government contracts are awarded.
It is a long and involved process that starts with a perceived need that is either seen by a government organization, such as the Air Force, or an idea that is developed by a contractor to solve a need they think exists. In either case it is not long until a product concept is developed to potentially satisfy that need. Cost estimates are developed based on cost estimating relationships. Invariably the cost is considered too high by the government agency, so they tell the contractor to find ways to reduce the cost. By this time other contractors are aware of the potential product either by being informed by the government or through clever marketing investigation.
The next step in the process is now underway. It may involve contracted studies or company-funded studies. In either event the various contractors involved set about to reduce the cost. The first steps are usually legitimate. They attempt to improve the design to reduce cost. This can be done by reducing the size or by simplification of the design. Generally this is not enough to satisfy the government agency if it is a large program that must be sold to Congress as a line item in the budget. Government procuring agencies know that the lower the projected cost, the easier it is to sell as it moves up the bureaucratic leadership ladder on its way to Congress, so they send the contractors back to the drawing boards to find a lower cost number.
Now the government procurement game starts in earnest and what follows is a sad discourse on our government agencies, members of Congress, and participating contractors. There is nothing about what happens from this point on that can be considered legitimate or honorable. It is only “legal” because it is nearly impossible to prove conspiracy in what happens. The key players are very skilled at the game, while most of the minor participants are unaware of what is going on in the process.
Contractors reduce their program cost estimates in order to satisfy the demands of the procuring agencies even though by this time the contractors have already incorporated all the legitimate cost reductions. So now they must lie. Program managers can hardly tell their employees to lie about the estimated costs, for someone would certainly blow the whistle. So the easiest way to work around that problem is to establish a set of estimating ground rules that will result in a reduced number by not telling the whole truth. It is important to keep the number just low enough to sell the program, because it will be necessary to reduce it some more later, as you will see.
After the studies have defined the program and sold the government on its merit, it is time for the request for proposal (RFP) or request for quotation (RFQ) issued by the government procuring agency, which specifies the requirements for the item they want to buy. Included are all of the criteria that must be met by a successful bidder and how the proposal will be evaluated by the government to select a winner. In most cases, cost is at the top of the list of selection criteria. The contractors know from experience that they must submit the lowest bid to win, or at least one low enough to make the final round in the selection process, where they will be given one more chance to make a best and final offer. They also know that the procuring agency needs a low bid so it can convince Congress to fund the program. They know that after a program is funded by Congress it is nearly always continued even when there are serious overruns. This sets the stage for what actually happens.
Both the procuring agency and the bidding contractors are playing the procurement game for high stakes. The objective of the game is to convince Congress to fund the program. The primary rule is to not reveal the truth about actual costs because the program could not be sold to Congress and the contractor would not win the competition. There is an unstated understanding between the government procuring agency and the contractors that they will be able to make up the difference between the proposal costs and real costs through change orders or renegotiation sometime after the contract award.
The next rule of the game is to make the bids look real enough not to raise any suspicion and, most important to the company managers, to stay out of trouble during audit. That is quite easy for those who know how to play the game of hiding real costs from the public, Congress, and auditors. Experienced people don’t use real CERs when it comes time to bid on government contracts. They would never win a competitive procurement by using real cost estimates, so most companies bid the cost for which they could do the contract if they did not make any mistakes. The estimate is often made by the bidding team following guidelines from management that tell them to include only the amount of time and materials absolutely essential to accomplish the job. This kind of estimate does not show the real impact on program costs due to error correction. Management then “scrubs” this already low estimate to an even lower level they hope will be enough to win. Their speech to the employees says, “I know it will be tough, but we can do it with hard work,” knowing full well that there is not a chance of doing it for the bid price. The important thing is to win the contract; dealing with the overrun will be next year’s problem, and the government will pay anyway, so why worry.
With this approach it is quite easy to make the bid one half to one third of what it will actually cost and still be able to defend it in an audit. This works because government procurement agencies are as anxious to hide the real costs as the contractors. Does this sound like the old shell game? It is, and guess who loses.
Unfortunately, there are very few development contracts performed without error, and the inevitable result is large overruns, for which the government (that is, the American taxpayer) pays. Since most development contracts are cost plus a fee, there is not much risk for the contractor. The key risk is in the size of the fee, as contractors are usually given incentives for various items, including cost. However, this is usually circumvented by change orders that pad the contract size, so instead of losing fees, the fees are actually increased. This game of how to win government contracts actually results in programs costing much more than if the true costs were stated in the beginning. The combination of initial underfunding and attempts to cut corners increases the number of errors above what would normally happen if real costs were identified in the contract bids.
Another factor influencing contractor selection is the government’s desire to keep many companies in business in order to preserve a broad technology base in case of emergencies. This means the government often awards contracts to poorer-performing contractors who cannot compete effectively in the commercial marketplace, which in turn often leads to even higher cost overruns.
The last major factor is political influence. Congressmen have now joined the game, and they play with their own set of rules. Since they control the purse strings, particularly committee chairmen, they are in a position to make sure that contracts are awarded to companies within their constituencies or to major political contributors. The procuring government agency knows about this rule and can prepare the way so that the desired contractor can win. “How can that happen in the bidding process?” you ask. Very simple. Write the RFP or RFQ with requirements biased toward a particular contractor. The loser here again is the public.
A prime example of this convoluted contracting process is the Space Shuttle. A senator from Utah was head of the Space Science Committee when the design concept was changed so that the Shuttle would use solid rocket boosters that could be manufactured by a contractor in Utah. This was not the best design, and seven dead astronauts are a testament to that fact.
This is not a new game. It is played all of the time. A classic example was the Air Force TFX procurement program many years ago. There were repeated rounds of competitions with changing requirements until finally they could select General Dynamics, a company with manufacturing facilities in Texas, a state with very powerful political leaders. The result? The F-111, which had many years of major problems, huge overruns, and in the end never really fulfilled the original requirements.
The situation is much different when a firm must compete in the commercial marketplace. There, everything has a fixed price, with an inflation escalator clause in some cases. Performance is guaranteed. If the company makes a large error in estimating costs or performance, it has to absorb the losses; if it cannot, it goes into bankruptcy. There is normally no government bailout for huge cost overruns in the commercial world. An exception was the government’s bailout of Lockheed over the L-1011 losses and the losses on the C-5A, which were incurred when the government attempted to make the purchase of that airplane closer to a commercial contract by forcing Lockheed to adhere to the original contract conditions. The bailout was made to keep Lockheed in business because it held so many critical military contracts.
Considering that the development of the first solar power satellite would possibly be a government-sponsored program, why would it be any different from other government programs? It wouldn’t be, unless it was handled differently from a typical government procurement. Several steps could be taken to minimize the probability of its falling into the same warped pattern as normal government procurement. First, the objective would have to be clearly established and adequate funds committed so there would be no necessity to hide actual costs. This was done successfully on the Saturn/Apollo program. Second, the procurement process should be placed in the hands of a new government agency established for this project only, one that would not yet have developed a deeply ingrained pattern of bureaucratic inertia in its people and policies. Third, fixed-price contracting should be used wherever possible, even during the development phase, with fixed-price follow-on production contracts as the carrot for the contractors to use honest estimates. Development costs should not be the primary evaluation criterion; instead, performance, procurement costs, maintenance costs, and operational costs should be.
In any event, when all the perturbations caused by government contracting practices are eliminated, the use of cost estimating relationships based on comparable technology of past programs and accounted for in man-hours are quite accurate for projecting the magnitude of large program costs, as long as the estimator knows the complexity of the program.
One thing that is unique to the cost estimates made for solar power satellite program development is that they include all costs necessary to bring the system into being and produce it, as if everything had to be done for only that program. Let me explain what I mean. The cost estimates include the cost of assembly bases in space; the development of a new transportation system; the construction of a launch facility; the building of factories to manufacture solar cells, klystrons, and other high-production components; and all the elements of the satellite and receiver. This is because NASA and DOE had taken the approach on the solar power satellite program that all costs must be accounted for no matter how they are financed. This is not done for any other energy system. For example, the cost of building a new transportation system for coal, whether it be a slurry pipeline or a new railroad, is never included as part of the cost of using coal as an energy source. Synthetic fuel costs were often stated as marginal costs, meaning the capital cost of the facilities and equipment to process the fuel is not counted.
The development costs that were estimated by the Boeing study team for NASA are similar to those developed by the other team, headed by Rockwell International. They are stated in 1979 dollars, which means that the labor and material costs are what were experienced in 1979. Included are research costs, costs of a major demonstration unit, the first full-sized satellite, and all the supporting facilities. The total is $116 billion. If the cost of the infrastructure that can be used for other applications is taken out, the remaining cost is $33 billion.
The development costs I have given are based on the ground rules and technology base of 1979. Today, even though inflation would indicate the cost should be doubled, in reality it will be dramatically reduced, because much of the infrastructure development cost included in the original estimate has since been developed for other reasons. The US Space Station and the terrestrial solar cell industry are but two examples. Huge savings will be achieved with the use of a ground-based prototype of the satellite power generation and energy transmission systems. As I will discuss in a later chapter, this will save many tens of billions of dollars and will make it possible for the utility industry to fund the development of the power plant part of the system.
While we were studying solar power satellites, President Carter proposed spending $88 billion on synthetic fuel even though it was going to be more expensive than oil. However, with the breakdown of the oil cartel, this program was later abandoned. Eight billion was spent to build the 799-mile-long Alaska pipeline from the North Slope of Alaska to Valdez. That is $10 million per mile just to carry oil across one state. From 1983 to 1993, the US oil companies by themselves spent $192 billion on drilling and exploration for oil and gas—with a result of 30% dry holes.
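The per-mile figure quoted for the Alaska pipeline follows directly from the numbers in the text; a quick sketch of the arithmetic, using only the figures given above:

```python
# Rough check of the per-mile figure for the Alaska pipeline.
# Both numbers come from the text; this is only an arithmetic illustration.
total_cost = 8_000_000_000   # $8 billion construction cost
miles = 799                  # North Slope of Alaska to Valdez

cost_per_mile = total_cost / miles
print(f"${cost_per_mile / 1e6:.1f} million per mile")  # about $10 million per mile
```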
When we consider how much money has been spent on other energy systems without significant benefits, the cost of investing in our future by developing a solar power satellite system does not seem to be so expensive, even if it costs as much as the original $116 billion estimate.
Production Costs are Reasonable
The result of the production cost estimates for a single 5,000-megawatt satellite was $12 billion. This cost was developed in 1979 dollars and includes the cost of the satellite, the ground receiver, and the cost of transporting it to space. This would be approximately $24 billion in 1995 dollars.
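The 1995 figure is simply the 1979 estimate scaled by the rough doubling that the intervening inflation implies. A minimal sketch of that adjustment, with the factor of two taken from the text rather than from any official price index:

```python
# Convert the 1979 production cost estimate to 1995 dollars.
# The doubling factor is the text's rough inflation adjustment,
# not an official CPI-derived figure.
cost_1979 = 12_000_000_000   # $12 billion per 5,000 MW satellite, 1979 dollars
inflation_factor = 2.0       # approximate 1979 -> 1995 adjustment

cost_1995 = cost_1979 * inflation_factor
print(f"${cost_1995 / 1e9:.0f} billion in 1995 dollars")  # $24 billion
```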
Solar Power Satellite
The production costs of much of the satellite were determined by estimating the cost of manufacturing a relatively small number of distinct parts in massive quantities. The solar cells that are assembled into panels and make up the largest part of the satellite are one key component. The supporting structure is huge, but very lightweight because of the weightlessness of space. It can be built from aluminum, using automated beam builders, or assembled by automated machines from mass-produced beam components. Another major satellite element is the antenna parts, which would also be mass-produced and assembled into subarrays. The same is true of the receiving antenna on the ground.
One third of the total cost for the system is space transportation, so it is a major factor. Without achieving low-cost space transportation the entire concept is not practical. The Space Shuttle is out of the question as the launch vehicle because of its high operational cost and limited payload. It was designed with a huge external propellant tank that is thrown away on each flight, the cost of the solid rocket fuel is exorbitant, and the shuttle orbiter itself is primarily a research tool not oriented to routine commercial operation. It requires a standing army of support personnel to maintain it, plan its flights, load and unload the payload bay, check it out for its next flight, fuel it, launch it, and monitor its flight.
The cost of space transportation for the satellite system was based on a very large, fully reusable two-stage launch vehicle that was defined to sufficient detail to verify that it can be operated at the required cost levels. Cost estimates for the entire system were based on a solid definition of the hardware.
I have stated that “production costs are reasonable.” That does not mean they are cheap. Based on how the costs were estimated and the comparative checks that can be made, the estimates are a reasonable projection of what can be expected when corrected for inflation. The spread of costs among several different categories of hardware and cost elements means that an error in one area will not catastrophically affect the total. In addition, there appears to be as much chance of the cost being less than projected as there is of it being higher.
One of the key findings of the industry survey we made in 1994 was the need to reduce the size of the satellites from 5,000 megawatts to no larger than 2,000 megawatts. The reason is the inability of the electric utility grids to readily handle a single power plant with so much output. If power is unexpectedly lost for any reason, the grids must be able to absorb the sudden loss. Experience has shown that about 2,000 megawatts is the maximum they can tolerate without having the whole network drop off line. Therefore, it looks like a 1,000-megawatt output is a good size for an updated design.
The cost will be reduced in proportion to the size decrease. In addition, as technologies advance it will be possible to increase the efficiency of the system and thus reduce its size even more. This will provide further reductions in the cost.
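The proportional reduction described above can be sketched with a simple linear scaling; a hypothetical illustration, assuming cost scales strictly with output and taking the 5,000-megawatt, $24 billion (1995 dollars) baseline from the earlier estimate:

```python
# Sketch of the proportional cost scaling described in the text:
# if cost scales linearly with satellite output, a smaller plant
# costs proportionally less. This is an illustrative assumption,
# not a detailed cost model.
BASELINE_MW = 5_000
BASELINE_COST = 24_000_000_000   # 1995 dollars, from the earlier estimate

def scaled_cost(output_mw, ref_mw=BASELINE_MW, ref_cost=BASELINE_COST):
    """Estimated cost under the simple assumption of linear scaling with output."""
    return ref_cost * output_mw / ref_mw

print(f"2,000 MW satellite: ${scaled_cost(2_000) / 1e9:.1f} billion")  # $9.6 billion
print(f"1,000 MW satellite: ${scaled_cost(1_000) / 1e9:.1f} billion")  # $4.8 billion
```

Efficiency gains from advancing technology would shrink the satellite further for the same output, pushing these figures lower still, as the text notes.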
Cost was one of the primary issues in 1980 when the government program was suspended. It is still an issue and will remain one until the first satellite has been built and demonstrates that the estimates are correct. Small-scale testing on the ground can remove most uncertainties at very modest cost, but construction of the real hardware in space will be the final proof.