The western United States, from Washington down and around to New Mexico, is facing some of the greatest risks of drought and wildfire in its history, driven by global warming, poor forest management, and a combination of other factors.
The risk of blackouts follows directly, and blackouts can have dire consequences for businesses and people's lives, as those that have occurred recently in California demonstrate.
We’ve all seen satellite images of wildfires, smoke and the resultant deforestation. But wouldn’t it be nice to take all the open-access, high-resolution earth-observation satellite imagery the world produces and combine it with artificial intelligence, letting you detect and respond to fires at the critical times and places that could prevent the worst from happening?
And not just our satellites, but everyone’s (see figure below). How about integrating publicly available and crowdsourced images from social media networks like Twitter, Flickr, and Instagram?
And not just with visible light, but with multispectral imagery, particularly thermal imaging, which is the most important for finding and outlining fire at its multiple levels of heat.
Thus the development of RADRFIRE, or Rapid Analytics for Disaster Response for Wildfires, at the Department of Energy’s Pacific Northwest National Laboratory (PNNL) in Richland, Washington. RADRFIRE takes in all this imaging data and combines artificial intelligence and cloud computing with damage-assessment tools to provide early fire detection, predict the paths of wildfires, and evaluate the impacts of natural disasters.
The goal is actionable intelligence that not only evaluates risk but predicts what can happen, showing what has worked in the past to suppress or control fires and what can work now. The system can even see where the heat perimeter of the fire is and where it is going, where hot spots are, where to drop retardant and where retardant has already been dropped, and find safe routes for firefighters. All in real time.
It can even determine how much carbon is lost because of the fire, and what post-fire flooding and landslides will occur and where.
RADR is about as close to a Tricorder as we can come right now.
In 2020 alone, over ten million acres burned across the United States, a level three times higher than the 1990–2000 10-year average. Between fire suppression costs, direct and indirect costs, wildfires in 2020 cost the United States about $170 billion. Add in floods, hurricanes, and other natural disasters, and the toll of disasters on the livelihoods of Americans is huge.
RADRFIRE provides a round-the-clock, entirely automated workflow, from image retrieval through analytics to dissemination of data, for all fires in the U.S., whether in a remote wilderness area or a community in the wildland-urban interface, whether a grass-and-shrub fire or a forest fire. By contrast, current wildland fire analytics operations rely mainly on nighttime, aircraft-based image collections and on human analysts for image interpretation.
RADR takes advantage of the different orbital tracks and timings of the various satellites. It also uses several experimental sensors aboard the International Space Station, which provide further differentiation in time: the Station flies a different orbit than other earth-observation satellites, allowing greater imaging coverage.
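To see why mixing platforms on different orbits shortens the time between looks at a fire, consider a back-of-the-envelope sketch. The revisit periods below are invented for illustration, not the actual schedules of any satellite or ISS sensor:

```python
from datetime import datetime, timedelta

def pass_times(start, period, count):
    """Generate overhead pass times for one platform with a fixed revisit period."""
    return [start + i * period for i in range(count)]

def max_coverage_gap(all_passes):
    """Longest stretch with no imagery from any platform."""
    times = sorted(all_passes)
    return max(b - a for a, b in zip(times, times[1:]))

t0 = datetime(2021, 8, 15, 0, 0)
# Hypothetical revisit periods: two sun-synchronous satellites plus an ISS sensor
sat_a = pass_times(t0, timedelta(hours=12), 6)
sat_b = pass_times(t0 + timedelta(hours=5), timedelta(hours=12), 6)
iss = pass_times(t0 + timedelta(hours=2), timedelta(hours=7), 10)

one_platform = max_coverage_gap(sat_a)            # 12-hour blind spots
all_platforms = max_coverage_gap(sat_a + sat_b + iss)  # gaps shrink to 6 hours
```

Because the ISS sensor's pass times drift relative to the sun-synchronous satellites, the combined constellation fills in gaps that any single platform leaves open.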
America is at National Preparedness Level 5, the highest activation level, meaning the majority of wildland fire resources are committed, with more than 25,000 wildland firefighters and support personnel involved. For aircraft imaging support, this means not every fire will get the aerial mapping it needs. This is where RADRFIRE’s capabilities really help out, as every single fire in the country is automatically imaged and analyzed, regardless of its size, complexity, or location. RADRFIRE is cloud-based, so as the number of fires increases, the system can be scaled up to keep pace with demand.
While focused on the United States right now, this technology can be used anywhere in the world, and on other disasters besides wildfires, because the satellites provide global coverage.
Dr. Andre Coleman and his team of researchers at PNNL are part of the First Five Consortium, a group of government, industry, and academia experts committed to lessening the impact of natural disasters using advanced technologies. Coleman and team are expanding RADR’s operational analytics and modeling to mitigate damage to key energy infrastructure.
Using a combination of image capturing technology (satellite, airborne, and drone images), artificial intelligence (AI), and cloud computing, Coleman and the team work to not only assess damage but to predict it as well.
The figure above is a good example of RADRFIRE in use right now: a RADR image of the Jack Fire (north) and the Devils Knob Complex (south) in Oregon showing various key elements. Notice the description of the fire impacting high-voltage transmission lines. The well-known Crater Lake National Park is visible in the lower-right (southeast) corner of the image. The RADR-Fire analytics were generated from European Space Agency Sentinel-2 earth-observation imagery collected on August 15, 2021, at 11:59 PDT. A single Sentinel-2 image covers roughly 4,650 square miles (68 x 68 miles), with successive image scenes along the satellite orbit collected only seconds apart. The inset shows an example of the mapping detail captured on the wildfire.
Prediction takes more than just images, so RADR combines the imagery analyses with weather, fuel, and forecast data. Wind, vegetation, and anything else a fire can consume all factor into the size of a fire and the direction it takes. By marrying imagery with fuel data and wildfire models, one can accurately predict the path a fire will take.
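The idea of marrying fuel and wind data can be illustrated with a toy cellular model. This is not the model RADR uses; it is a minimal sketch in which each burning cell ignites a neighbor based on that neighbor's fuel load and how well the spread direction aligns with the wind:

```python
def spread_step(burning, fuel, wind, threshold=0.6):
    """One time step of a toy fire-spread model.
    burning: set of (row, col) cells currently on fire
    fuel: dict mapping (row, col) -> fuel load in [0, 1]
    wind: (drow, dcol) unit-ish wind vector
    Returns the new set of burning cells."""
    new = set(burning)
    for (r, c) in burning:
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            cell = (r + dr, c + dc)
            if cell in new or fuel.get(cell, 0) == 0:
                continue
            # Alignment in [-1, 1]: downwind neighbors are favored
            align = dr * wind[0] + dc * wind[1]
            score = fuel[cell] * (1 + align) / 2
            if score >= threshold:
                new.add(cell)
    return new

# Uniform fuel on a 3x3 grid, wind blowing east, ignition at the center:
fuel = {(r, c): 1.0 for r in range(3) for c in range(3)}
front = spread_step({(1, 1)}, fuel, wind=(0, 1))
```

With an easterly wind, only the downwind (eastern) neighbor crosses the ignition threshold; operational models add terrain slope, fuel moisture, and forecast winds in the same spirit.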
Of course, these assessments need to get in the right hands as fast as possible. Coordinating a response requires local, regional, and national resources, each in different locations but needing the data as quickly as possible in a format that can be readily accessed and interpreted, particularly in a data communication constrained environment. A cloud-based system provides an end-to-end pipeline for retrieving available imagery, processing the analytics, and disseminating data to be used directly in a user’s own software, through desktop web browsers, and/or via mobile applications. Added visual analytics produce images and datasets that can be easily discernible to a wide audience of responders.
As Coleman puts it, “This process uses an application programming interface (API) that enables our system to communicate with other systems around the world. Once the imagery is available, it is retrieved from the sensor operators to our cloud system where it is pre-processed (different set of steps for each satellite sensor), then a set of trained machine-learning (AI) codes are used on the imagery to extract and classify data according to the standards of the National Wildfire Coordinating Group – these are comprised of intense heat (the fire front), scattered heat, spot fires, and the heat perimeter (or the overall impact area). These machine-learning models use the unique electromagnetic spectral signatures of the data to derive the required information. These classified data are then converted from an image-based format to a vector format so the file size of the data products are as small as possible, such that they can be transmitted in limited data communication environments.”
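The classify-then-vectorize step Coleman describes can be sketched in miniature. The thermal values, thresholds, and bounding-box vectorization below are invented stand-ins; the real system applies trained machine-learning models to multispectral signatures and produces proper GIS polygons:

```python
# Heat classes loosely following the National Wildfire Coordinating Group categories
INTENSE, SCATTERED, PERIMETER, UNBURNED = "intense", "scattered", "perimeter", "unburned"

def classify(thermal):
    """Map a 2-D grid of (made-up) thermal readings to heat classes."""
    def label(v):
        if v >= 500:
            return INTENSE     # fire front
        if v >= 350:
            return SCATTERED   # scattered heat
        if v >= 200:
            return PERIMETER   # overall impact area
        return UNBURNED
    return [[label(v) for v in row] for row in thermal]

def vectorize(classes, target):
    """Collapse a class mask to a bounding box (min_row, min_col, max_row, max_col).
    A crude stand-in for raster-to-vector conversion, which is what shrinks
    the payload enough to send over constrained links."""
    cells = [(r, c) for r, row in enumerate(classes)
             for c, k in enumerate(row) if k == target]
    if not cells:
        return None
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return (min(rows), min(cols), max(rows), max(cols))

grid = [[100, 250, 100],
        [250, 600, 400],
        [100, 250, 100]]
classes = classify(grid)
fire_front = vectorize(classes, INTENSE)
```

The payoff of the vector step is the same in miniature as at scale: a handful of coordinates replaces an entire image, which is what makes transmission feasible in limited-bandwidth field conditions.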
“The Geographic Information System (GIS) files are stored in an open-source geospatial database and are provided as Open Geospatial Consortium (OGC) data services, where websites, apps, and GIS software can connect, retrieve the data, and use it in existing wildland fire management workflows. Information regarding the fire, the date and time of the imagery collection, the satellite that collected the imagery, the algorithm and version, and other data are included in the data so it is self-documenting for the end-user. The system is designed for flexibility such that multiple kinds of end-users can use the data and information in a variety of ways.”
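Because the data is published through standard OGC services, any compliant client can request it. The sketch below builds a Web Feature Service GetFeature request for fire perimeters as GeoJSON; the server address and layer name are hypothetical, but the parameter shape is what any OGC WFS 2.0 server expects:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, layer, bbox, srs="EPSG:4326"):
    """Build a WFS 2.0 GetFeature URL requesting GeoJSON for a bounding box."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": layer,
        "bbox": ",".join(map(str, bbox)) + "," + srs,
        "outputFormat": "application/json",  # GeoJSON
    }
    return base + "?" + urlencode(params)

url = wfs_getfeature_url(
    "https://example.org/geoserver/wfs",   # hypothetical server
    "radr:heat_perimeter",                 # hypothetical layer name
    (-123.5, 42.7, -122.4, 43.6),          # lon/lat box over southern Oregon
)
```

A GIS package, a web map, or a mobile app would issue the same style of request, which is what lets one published dataset serve many kinds of end-users.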
The public can use the interactive site below to view some of this information; the figure below is a screenshot of it.