With Renewables, It’s Location, Location, Location


By Catherine Bolgar

As renewable energy projects multiply, and with momentum expected to accelerate thanks to the Paris Agreement on climate change, advances in modeling and simulation are improving site selection in order to maximize the return on investment.

A flood of new, mostly small entrants is providing simulations using meteorological parameters to compile detailed information about solar irradiation and wind conditions. “It’s relatively new. It’s disrupting the market,” says Nicolas Fichaux, senior program officer at the International Renewable Energy Agency (IRENA), the global intergovernmental organization for renewable energy, based in Abu Dhabi. “If you’re on a remote island in the Pacific, you can contact a private company and, for a couple of thousand dollars or euros, buy data about how much solar or wind power you have in a specific location. It will give you a good overview of the site quality. If you move ahead, you will combine it with local measurements to develop a bankable project.”

IRENA helps governments looking for the best sites for renewable energy projects. Considerations include not only the amount of sun or wind, but also topography, environmental factors and the distance to grid connections and population centers.

“We can help a government to select the combination of technology and area where energy will be the most cost effective,” Dr. Fichaux says. “We can say, ‘If you develop this cluster, this is the price of electricity you could expect, depending on the cost of capital.’ We can also test policies—for a given tariff, we can assess whether the returns will be fair. We can do this for every square kilometer on the globe. Three years ago, nobody could do that.”
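
To make the idea concrete, the sketch below runs the kind of cell-by-cell screening Dr. Fichaux describes: annualize the capital cost at a given cost of capital, add operating costs, and divide by the energy the cell’s resource can deliver. All of the figures (the capacity factors, the $1,400/kW capex, the 8% discount rate) are illustrative assumptions, not IRENA’s numbers.

```python
# Minimal sketch of per-cell LCOE screening (illustrative numbers only).
# LCOE = (capex * CRF + annual opex) / annual energy, where the capital
# recovery factor (CRF) annualizes upfront cost at a given cost of capital.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize capex at discount rate `rate` over `years`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex_per_kw: float, opex_per_kw_yr: float,
         capacity_factor: float, rate: float, years: int = 25) -> float:
    """Levelized cost of electricity in $/kWh for one grid cell."""
    crf = capital_recovery_factor(rate, years)
    annual_kwh_per_kw = capacity_factor * 8760  # hours in a year
    return (capex_per_kw * crf + opex_per_kw_yr) / annual_kwh_per_kw

# Hypothetical cells: (name, capacity factor from resource data)
cells = [("coastal ridge", 0.42), ("inland plain", 0.28), ("valley", 0.19)]
for name, cf in cells:
    cost = lcoe(capex_per_kw=1400, opex_per_kw_yr=40, capacity_factor=cf, rate=0.08)
    print(f"{name}: {cost:.3f} $/kWh")
```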

The models use decades of public data from satellites, as well as other information, such as aircraft data and detailed mapping.

“We take the initial conditions—the larger-scale portrait of the atmosphere—and we improve the resolution and have more details of the flow characteristics,” says Gil Lizcano, Brussels-based director of research and development at Vortex, a cloud-based wind-modeling company headquartered in Barcelona. “We make meteorological models with very high resolutions of 100 square meters. You need to compute with high resolution to distinguish areas within a wind-farm domain.”

Clients may be project developers, manufacturers, governments or other entities. Locations may be greenfields—for example, a government wants to know the best places in a country for wind farms—or microsites, where the locations have already been chosen and the question is how to distribute the turbines for maximum effect, including their spacing and height. Technical advances have increased turbine capacity, blade length and tower height.

“Now modern turbines are 80 to 120 meters, even higher,” Mr. Lizcano says. “This 40 meters can make a lot of difference. Wind increases with height, and we need to know how much.”
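
A standard first-order way to answer “how much” is the wind power law, which extrapolates a measured speed to a new height with a terrain-dependent exponent. The sketch below is a rough illustration; the 0.14 exponent and the 7 m/s reference speed are assumptions, and real assessments use measured shear profiles.

```python
# How much does 40 extra meters of hub height buy? A common first-order
# estimate is the power law: u(z) = u_ref * (z / z_ref) ** alpha, where
# alpha depends on terrain (~0.14 over open land; an assumption here).

def wind_at_height(u_ref: float, z_ref: float, z: float, alpha: float = 0.14) -> float:
    return u_ref * (z / z_ref) ** alpha

u80 = 7.0                                            # m/s measured at 80 m
u120 = wind_at_height(u_ref=u80, z_ref=80.0, z=120.0)
# Available wind power scales with the cube of wind speed.
gain = (u120 / u80) ** 3 - 1
print(f"{u120:.2f} m/s at 120 m -> ~{gain:.0%} more power than at 80 m")
```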

Turbulence is another factor, reducing machine performance and lifespan; stronger machines are available but more expensive, and the client needs to know whether they’re necessary and worth the investment, he says.

Similarly for solar irradiation, the SoDa service combines a database of images taken every 15 minutes for 12 years by the Meteosat satellites with geographic data, such as altitude and land cover, to show where solar irradiation is high or low, with a resolution of a square kilometer. “On the scale of a country, this is very precise,” says Etienne Wey, general manager of Transvalor Innovation, which operates SoDa and is part of Transvalor SA, a Mougins, France, company that works closely on research with the Mines ParisTech engineering school.

Steep slopes, forests, swamps and farms aren’t ideal for solar plants; neither are places that might be very sunny but are far from population centers. “We use techniques of exclusion to filter the area with other data, then rank the site based not only on solar irradiation but also on, say, the distance to an electrical sourcepoint, because it has to plug into the network, or availability of water for cleaning mirrors if it’s concentrated solar power. Then we make a map with the ranking,” Dr. Wey says.
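
That workflow translates naturally into a filter-then-rank computation. The toy version below uses made-up sites, thresholds and weights; SoDa’s actual criteria and scoring are more elaborate.

```python
# Toy exclusion-then-ranking pass over candidate solar sites.
# Thresholds and weights are illustrative assumptions, not SoDa's.

sites = [
    # (name, irradiation kWh/m2/yr, slope deg, km to grid, protected land?)
    ("site A", 2100, 3,  12, False),
    ("site B", 2300, 18,  5, False),   # too steep -> excluded
    ("site C", 1900, 2,   8, False),
    ("site D", 2200, 4,  60, True),    # protected land -> excluded
]

MAX_SLOPE_DEG = 10
candidates = [s for s in sites if s[2] <= MAX_SLOPE_DEG and not s[4]]

def score(site):
    name, ghi, slope, km_to_grid, protected = site
    # Reward the resource, penalize grid distance (weight is an assumption).
    return ghi - 10.0 * km_to_grid

for site in sorted(candidates, key=score, reverse=True):
    print(site[0], round(score(site)))
```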

“What we have added are tools where you click on a point on a map and we will give the amount of photovoltaic production you can have at any place. Also, how much hot water you can create if you put in so many square meters of solar hot-water collectors,” he says. “We are trying to polish our crystal ball.”
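
A back-of-the-envelope version of that “click a point” estimate multiplies collector area, panel efficiency, the annual irradiation at the point, and a performance ratio covering system losses. The numbers below are placeholders, not SoDa outputs.

```python
# Rough annual PV yield: E = A * eta * H * PR, with area A (m2), panel
# efficiency eta, annual irradiation H (kWh/m2) from a service like SoDa,
# and performance ratio PR for wiring, inverter and soiling losses.
# All values here are illustrative assumptions.

def annual_pv_kwh(area_m2: float, efficiency: float,
                  irradiation_kwh_m2: float, performance_ratio: float = 0.8) -> float:
    return area_m2 * efficiency * irradiation_kwh_m2 * performance_ratio

print(f"{annual_pv_kwh(50, 0.18, 1700):.0f} kWh/year")  # ~12,240 kWh
```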

For successful integration of renewables into the electrical network, a key element will be energy storage systems (ESS). Such systems, commonly called batteries, will ensure a minimum level of energy availability as well as a good level of power quality. Whereas only around 2 GW is installed today, IRENA estimates the world needs 150 GW of battery storage to meet the desired target of 45% of power generated from renewable sources by 2030.

Once sites are narrowed down, customers need to complement the satellite data with measurements taken on the ground. “The amount of radiation over a year could have an error of 3%-5%,” Dr. Wey says. “On a global scale it’s not much, but as competitive bids for solar plants are more precise and cheaper, missing a prediction by 2%-3% could mean losing money. The ground measurements correct any bias in the satellite data.”
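
One simple form of that correction is to fit a linear map from satellite values to ground readings on days where both exist, then apply it to the full satellite record. The sketch below uses fabricated values purely to show the mechanics.

```python
# Sketch of debiasing satellite irradiation against a ground pyranometer:
# fit satellite -> ground on overlapping days, apply to the full record.
# Values are fabricated for illustration.

paired = [  # (satellite kWh/m2/day, ground kWh/m2/day)
    (5.1, 4.9), (6.0, 5.7), (4.2, 4.1), (6.8, 6.4), (5.5, 5.3),
]
n = len(paired)
mean_s = sum(s for s, _ in paired) / n
mean_g = sum(g for _, g in paired) / n
cov = sum((s - mean_s) * (g - mean_g) for s, g in paired)
var = sum((s - mean_s) ** 2 for s, _ in paired)
slope = cov / var
intercept = mean_g - slope * mean_s

def debias(satellite_value: float) -> float:
    """Apply the fitted linear correction to a satellite estimate."""
    return slope * satellite_value + intercept

print(f"corrected: {debias(5.8):.2f} kWh/m2/day")
```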

Other companies are creating prediction software to forecast the output of a photovoltaic farm or wind farm over the next three, six or nine hours, so the operators can bid on electricity markets and maximize revenues, Dr. Fichaux of IRENA says.
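
Such forecasts blend numerical weather models with live plant telemetry; as a deliberately naive stand-in, the sketch below produces a one-step-ahead estimate by exponentially smoothed persistence over recent output readings.

```python
# Naive stand-in for a short-horizon production forecast: exponentially
# smoothed persistence. Real market-bidding forecasts combine weather
# models with plant data; this only shows the shape of the problem.

def smoothed_forecast(history_mw, alpha: float = 0.5) -> float:
    """One-step-ahead forecast from past output (most recent last)."""
    level = history_mw[0]
    for x in history_mw[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

recent_output = [41.0, 39.5, 43.2, 44.0, 42.1]  # MW, fabricated readings
print(f"next-interval bid basis: {smoothed_forecast(recent_output):.1f} MW")
```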

“There is a global understanding that there is a lot to be done on renewable energy,” he says. “We’re ready to move, but we need to estimate how big it will be, how much it will cost and how it will take place. It’s creating a boom in detailed modeling.”


Catherine Bolgar is a former managing editor of The Wall Street Journal Europe, now working as a freelance writer and editor with WSJ. Custom Studios in EMEA. For more from Catherine Bolgar, along with other industry experts, join the Future Realities discussion on LinkedIn.

Photos courtesy of iStock

Our Future Nuclear Challenge: Decommissioning Plants


By Catherine Bolgar

Nuclear power plants last a long time, but not forever. Around the world, 158 nuclear reactors have been permanently shut down, although only 15 have been fully decommissioned, or dismantled. The U.S., which has by far the most nuclear facilities, has decommissioned 10 commercial reactors and has 18 others going through that process. Germany decided in 2011 to phase out nuclear power entirely by 2022.

The question of how best to dismantle nuclear power plants will continue far into the future, considering that 60 reactors are under construction now, and that nuclear plants operate for 30-60 years.

“Our reactors had an initial design life of 30 years,” says Jerry Keto, vice president of nuclear decommissioning for Ontario Power Generation, which operates 20 of Canada’s 22 reactors. “But in operating them, we have confirmed that the plants can run much longer. Most plants in the U.S. and Canada confirm through aging-management programs that they’re fit.”

Some countries choose to dismantle and decommission their nuclear facilities immediately after shutdown. Some prefer deferred dismantling and delay the process for several years. Others choose entombment and convert theirs into waste-disposal facilities, after ensuring that the targeted end-state of the facilities is safe, according to Michael Siemann, head of the Nuclear Energy Agency’s division of radiological protection and radioactive waste management, part of the Organization for Economic Cooperation and Development (OECD).

“The choice of approach depends on many factors, some of which may be related to national circumstances,” Dr. Siemann says. “Nevertheless, immediate decommissioning is usually considered as the preferred strategy, meaning that the shutdown nuclear-power plants are usually dismantled and major parts also completely removed.”

One of the challenges in decommissioning is that for “some plants designed in the 1960s and ‘70s, not a lot of attention was given to the fact that somebody has to take them apart in the future,” Mr. Keto says. “Now, decommissioning is entrenched in design for new construction. The new reactors being built in the United Arab Emirates were designed already thinking of how they will be dismantled in the future.”

The age of many plants means that their original designs were on paper. “The main problem is there is so little information about the state of the buildings. The plans of the plants were mostly nondigital,” says Joseph W. Dörmann, mechanical engineer at the Fraunhofer Institute for Material Flow and Logistics, in Dortmund, Germany. “While the plant was running for 30 years, there were changes to the building. It’s really difficult to have a clear 3D model of the plant. It’s difficult to find out where, for example, radiation has had an impact. You need a lot of time to develop the information, to know which part of the plant can be demolished, by which technology, and whether to bring to safehouse”—a special nuclear-waste repository—“the nuclear waste or materials that have had contact with nuclear waste.”

Rather than try to digitize paper designs, OPG makes scans of the plants, “so we’re less reliant on what the paper says and totally reliant on the real area. Especially in high-radiation fields, it’s not a place for sending workers,” Mr. Keto says, adding that digitization is needed so that machines can be programmed to do the work automatically.

Where workers are needed, OPG even builds exact replicas of the area for training and practice, so that when workers enter a radioactive zone they can get the work done in the shortest period possible. OPG hasn’t yet decommissioned full facilities, Mr. Keto says, but it has “done some very highly radioactive work in our plants. We have permanently shut down two reactors and laid them up for future dismantlement.”

Only a small part of a typical nuclear plant is highly contaminated, with slightly more low-to-medium-level waste. About 90%-95% of a nuclear plant isn’t radioactive at all and can be demolished like any other industrial building, with waste taken to landfills or recycled, Mr. Dörmann says.

The walls of such plants can be 1.5 to 2 meters thick, and even in contaminated areas only the first 2-5 centimeters may be affected, he says. New cutting techniques, cameras and robots allow the contaminated part to be removed separately.
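
A rough calculation shows why that matters for waste volumes: if only the first 2-5 centimeters of a 1.5-2 meter wall is affected, the contaminated share of the concrete is a few percent at most.

```python
# Back-of-the-envelope check: contaminated fraction of a thick concrete
# wall when only a surface layer must be treated as radioactive waste.

for wall_m in (1.5, 2.0):
    for layer_cm in (2, 5):
        share = (layer_cm / 100) / wall_m
        print(f"{layer_cm} cm of a {wall_m} m wall: {share:.1%}")
# Worst case here: 5 cm of a 1.5 m wall, about 3.3% of the volume.
```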

Concrete may be blasted with dry-ice pellets rather than sand, because dry ice sublimates into gas and leaves nothing behind, whereas used sand must also be disposed of as contaminated waste. Scabbling is another technique for removing a layer of concrete. Rubble is then cracked into pieces smaller than 2 millimeters.

Non-contaminated material can be recycled as sand, concrete and metal. Parts of Germany’s Jade West Port on the North Sea used materials from a demolished nuclear plant, Mr. Dörmann says, adding, “It’s a way to do recycling on a large scale.”

The steel bars reinforcing the concrete can be melted down and recycled. Metal that’s slightly contaminated is turned into shield blocks for use at other nuclear facilities.

Nuclear facilities can’t be retrofitted into conventional power plants because they aren’t built for the same levels of heat, and the turbines are different, Mr. Dörmann says. However, the office buildings and even some other facilities can be renovated for new uses. Cooling-tower foundations run 18-21 meters deep; keeping them and building a new facility, such as a logistics center, on top would vastly reduce the amount of demolition waste, he says.

“Given the low number of only 15 completed decommissioning projects world-wide,” says the NEA’s Dr. Siemann, “it is too early to draw conclusions and to derive trends in the reuse of entire sites after decommissioning completion.”


Catherine Bolgar is a former managing editor of The Wall Street Journal Europe, now working as a freelance writer and editor with WSJ. Custom Studios in EMEA. For more from Catherine Bolgar, along with other industry experts, join the Future Realities discussion on LinkedIn.

Photos courtesy of iStock


Better Batteries Stabilize the Electric Grid


By Catherine Bolgar

Energy storage for the electric grid is taking off as the technology improves and battery prices fall. The global capacity of storage connected to the grid is expected to grow 15-fold this year compared with 2015, to 21 gigawatt-hours.

Energy storage can take many forms: freezing water into ice, then blowing air over it to cool a building in place of air conditioning; melting salt, then splashing water on it to create steam that powers a turbine; compressing air or other substances; pumping water uphill behind a hydroelectric dam; flywheels; rechargeable flow batteries that use liquids; and solid-state batteries.

“There are so many ways to store energy. All are viable in their own way. All have applications and scale that they are suited for,” says Matt Roberts, executive director of the Energy Storage Association, a Washington, D.C.-based industry group. “The lion’s share being installed today is lithium-ion batteries.”

Industrial sites may use energy storage, often in the form of batteries, in order to reduce their peak power demand and cut their electricity bill by two-thirds to three-quarters, he says.

Most storage, though, is for controlling the frequency on the grid (60 hertz in North America, 50 Hz elsewhere), which is achieved when supply and demand for electricity are in sync. If there is too much supply, substation transformers may be damaged; too much demand can cause brownouts.

Traditionally, the fluctuations in supply and demand have been smoothed out by peaking power plants, often fueled by natural gas. However, they may take three to five minutes to react, Mr. Roberts says. “In that time, the entire thing could swing in the other direction. Using a natural-gas plant for frequency control is like using a club for a surgical procedure.”

By contrast, battery storage can react in 100 milliseconds or less, says Andreas Ulbig, research associate at the Power Systems Lab of the Swiss Federal Institute of Technology Zurich (ETHZ) and co-founder of Adaptricity, a Zurich start-up that simulates active distribution grids. “Batteries are able to fill the gap with rapid response for balancing out renewables or reacting to any change in grid operations.”
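
The basic mechanism is proportional, or “droop,” control: inject power when frequency sags, absorb it when frequency rises. The sketch below illustrates the idea with assumed gains and ratings; it is not a model of any real controller or grid.

```python
# Minimal sketch of battery frequency regulation via droop control.
# Gains and ratings are illustrative assumptions.

NOMINAL_HZ = 50.0         # European grid nominal frequency
DROOP_MW_PER_HZ = 200.0   # assumed controller gain
P_MAX_MW = 20.0           # assumed battery power rating

def battery_response_mw(freq_hz: float) -> float:
    """Positive = discharge to the grid, negative = charge."""
    p = DROOP_MW_PER_HZ * (NOMINAL_HZ - freq_hz)
    return max(-P_MAX_MW, min(P_MAX_MW, p))

for f in (49.95, 49.99, 50.00, 50.02, 50.10):
    print(f"{f:.2f} Hz -> {battery_response_mw(f):+.1f} MW")
```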

In Europe, ancillary services—regulating frequency—from conventional sources and batteries get paid the same, he says. But in the U.S., the PJM Interconnection, which coordinates wholesale electricity in 13 Midwestern and mid-Atlantic states, pays battery owners a bonus for providing frequency control because they are so much faster, and therefore higher quality.

Under the PJM system, batteries take over the fast corrections, so a gas-powered plant “can run at 99.9% efficiency 100% of the time,” Mr. Roberts says. “It means more profits, a better emissions profile, and less wasted energy on the grid.”

Energy storage is key to making smart grids and super grids work by balancing fluctuations over wider areas, using automation and modeling.

However, “most modeling systems are based on outdated asset class systems”—electricity generators such as power plants and photovoltaic arrays—Mr. Roberts says. “An energy storage system doesn’t generate electricity, but when it pushes energy onto the grid it looks like a provider. But it can also look like it’s absorbing energy. Current simulation systems aren’t sophisticated enough. They still model for the power plant spoke-and-hub model of the 1970s.”

Models and simulations are improving. ETHZ and Adaptricity have created algorithms that allow battery owners to provide ancillary services that use less battery energy capacity while providing the same control services, Dr. Ulbig says. “It shows that smaller batteries can provide the same ancillary services as those with higher energy capacity.” Energy capacity is the biggest factor in the cost of batteries, so being able to get the same results with smaller batteries can cut costs significantly.
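
The published algorithm is not spelled out here, but one intuition for how a controller can stretch a smaller battery is a deadband: ignore tiny deviations so the battery cycles less, cutting the energy throughput the service consumes. The sketch below illustrates that idea only; it is a guess at the flavor, not the ETHZ/Adaptricity method.

```python
# Illustrative deadband controller: small frequency deviations draw no
# battery energy at all, reducing throughput and hence the capacity the
# service consumes. Parameters are assumptions, not a real design.

def response_with_deadband(freq_hz: float, deadband_hz: float = 0.02,
                           gain_mw_per_hz: float = 200.0,
                           p_max_mw: float = 20.0) -> float:
    dev = 50.0 - freq_hz  # positive when frequency sags
    if abs(dev) <= deadband_hz:
        return 0.0        # inside the deadband: do nothing
    p = gain_mw_per_hz * (dev - deadband_hz * (1 if dev > 0 else -1))
    return max(-p_max_mw, min(p_max_mw, p))

print(response_with_deadband(50.01))  # 0.0 MW: battery not cycled
print(response_with_deadband(49.90))  # 16.0 MW discharge
```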

The importance of energy storage is set to grow as renewables make up a bigger share of the energy mix. The way that conventional power plants generate electricity, with gigantic rotating masses, creates slower deviations in frequency. With more renewables on the grid, “changes in grid frequency may happen faster. So it will be particularly useful to have faster frequency control,” Dr. Ulbig says.

Energy storage is set to grow, because it can “create a grid that integrates renewables, is flexible and resilient,” Mr. Roberts says. “It’s more cost effective and valuable.”


Catherine Bolgar is a former managing editor of The Wall Street Journal Europe, now working as a freelance writer and editor with WSJ. Custom Studios in EMEA. For more from Catherine Bolgar, along with other industry experts, join the Future Realities discussion on LinkedIn.

Photos courtesy of iStock
