A Thirteen Billion Year Old Photograph

News Room | Published 1 October 2025 | Last updated 6:46 AM

Transcript

Dr. Kenneth Harris: We're going to take a journey. We're going to talk about how the hardware and the software blend together in this really cool and really expensive observatory, this $13-billion observatory. Picture with me, it's 1800, and a German-British astronomer has just stumbled upon something that's going to help us see the universe in a totally different light. His name is Sir William Herschel, and he's experimenting with light, specifically measuring the temperatures of different wavelengths.

Imagine he's looking at the spectrum of light. He puts his thermometer at the very end of the longest wavelengths, what's going to be your red, your hot wavelengths, and he realizes that the temperature of his thermometer continues to increase. What does this tell us? This tells us that there's something beyond what we can physically see. There's something beyond that red wavelength that we can see. He coins this term as cleverly as possible: invisible heat.

It then turns into what's known as dark heat. It's known as dark heat throughout the whole 1800s into the early 1900s, until we know it today as infrared light. Infrared light is the wavelength, or a spectrum of light, that goes beyond the visible light spectrum. Like so many other experiments, Sir William Herschel was not actually looking for this infrared light. He couldn't, it's invisible. He was not looking for infrared light, but he stumbled upon it, like many great discoveries that you find throughout human history. This is going to be the image of the Tarantula Nebula. It's called the Tarantula Nebula because it almost looks like the hollowed-in part of a tarantula's web. Why do I bring it up? We're going to take a step back and use infrared light to help us see some of the earliest galaxies, or the earliest stars within a galaxy. We're going to take a look at a 13-billion-year-old photograph.

Why do I bring up Sir William Herschel, other than the fact that he is the one who stumbled upon this dark heat, this invisible light, or invisible heat, as it was so called? It wasn't until the 1900s that a Hungarian physicist by the name of Kálmán Tihanyi invented the world's first infrared camera. The first infrared camera was used for military purposes, because, again, what isn't used for military purposes that eventually comes over to science? The military used this camera to see at night, night vision. Any of my military folks, or any of my gamers who play Call of Duty, things like that, know infrared technology. It's often misconstrued as thermal vision. It's not thermal vision.

This information was classified as top secret in the 1920s and declassified in the 1950s and 1960s, when we see it expand into the three bands that we'll talk about today: near-infrared, mid-infrared, and far-infrared technologies. Here's the question. How has this discovery helped us to understand the universe even better? Why does infrared light matter in the grand scheme of the universe? We're going to have an amazing time today. We're going to unpack this. We're going to talk about not only the technology that we develop but how it quite literally mimics the human body and parts of the human anatomy to help us observe the deep universe.

Background

My name is Dr. Kenneth Harris. I've been in this industry for the past 17 years. That picture at the top left there is me, 16 years old, doing some of the first work on the James Webb Space Telescope, and the bottom right photo there is right before it launches. To be able to work on the James Webb Space Telescope was one of those life-changing, once-in-a-lifetime events. Again, talk about the bridge between the mechanical aspect and the software aspect, because it's my belief that software engineers are the folks who help us develop the brains of the mission. Without software engineers, all we're doing is sending dumb boxes into space to orbit at thousands of miles per hour. Software engineers are the backbone of everything.

High-Level Overview (The James Webb Space Telescope)

Jumping right into this technology. We need to first understand how the observatory processes data so that our eyeballs can really appreciate it. Has anyone heard of the James Webb Space Telescope? Has anyone worked on the James Webb Space Telescope? The James Webb Space Telescope is the world's most powerful and complex space observatory to date, with an asterisk, because we're working on the Nancy Grace Roman Space Telescope, which is going to be the next biggest and best thing. It's almost like Batman to Robin: Roman has a huge field of view that James Webb can then zoom in on and get some great imaging from. Check out the Nancy Grace Roman Telescope if you have not heard of it. James Webb is orbiting in an L2, or Lagrange point 2, orbit, which is 1.5 million kilometers from Earth.

The reason we need to do that is so that it's away from all the light and other things that can blur the images that are up there. Running through these images, top left is going to be an artist's rendition of a pass-by of our satellite. Bottom left, we had to actually fold this thing up to fit it into the launch vehicle, into a rocket. This image in the middle right here is us loading it into what's known as Chamber A, at the Johnson Space Center in Houston, Texas. We're loading it in there three days before Hurricane Harvey actually touched down in Texas. You can tell we're all a bit nervous. I took that picture and I left, I went home, and they put it in the chamber. The image over there, that's Brian taking off the lens cap. That's a really important piece right there. Remember that area he's looking at right now, that is the entrance to our instrument module. That is how the light funnels into our instruments that help us process that information and see this distant universe, these distant galaxies.

Cosmic Redshift

What are we studying? We're studying light, more specifically photons, more specifically wavelengths. Fun dinner topic, if you just want to nerd out with some friends or just impress some friends: cosmological redshift is what we're studying, or cosmic redshift. What is cosmic redshift? As the universe constantly expands, the light from distant sources stretches to longer and longer wavelengths. As I mentioned earlier, red is the longest wavelength that we can see. Once light stretches beyond that, it's entering infrared. Also, the universe isn't empty. The universe is exceedingly dusty. It's dusty because things are constantly blowing up, maybe not near us, but constantly blowing up. For those reasons there's always debris and things like that floating around. Light also has to peer through dust in order to get to our eyeballs.
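
To put a rough number on that stretching: the observed wavelength scales with the emitted wavelength by a factor of (1 + z), where z is the redshift. Here is a minimal sketch, with an illustrative redshift chosen by me rather than taken from the talk, showing how ultraviolet light from a very early galaxy ends up in Webb's infrared range.

```python
# Minimal sketch of cosmological redshift: wavelengths stretch by (1 + z).
# The redshift value is illustrative; z around 13 corresponds roughly to light
# emitted in the universe's first few hundred million years.

def observed_wavelength_um(rest_wavelength_um: float, z: float) -> float:
    """Observed wavelength after the universe has stretched it by (1 + z)."""
    return rest_wavelength_um * (1.0 + z)

lyman_alpha_um = 0.1216   # rest-frame ultraviolet hydrogen line
z = 13.0                  # illustrative redshift for a very distant galaxy

print(f"{observed_wavelength_um(lyman_alpha_um, z):.2f} um")
# ~1.70 um: invisible to the eye, but squarely in Webb's near-infrared range.
```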

On top of getting to our eyeballs, it has to go through the atmosphere and then the layers of the planet as well, which is why we put James Webb in this orbit, 1.5 million kilometers away: it doesn't have to deal with a lot of those things. Cosmological redshift is the phenomenon that we are trying to overcome using these onboard instruments, this onboard software, and things of that nature. It was first discovered and first studied by Edwin Hubble. Does that name sound familiar to anyone? Everyone's heard of the Hubble Space Telescope? The James Webb Space Telescope is the follow-on to the Hubble Space Telescope. Hubble did not exclusively do visible light; it did visible light and some bands of infrared light. James Webb does a lot more infrared and very little, if any, visible light. We'll talk about how it does near-infrared as well as mid-infrared through the instrumentation that it uses on board.

How Does It Work?

How does this thing work? Think about your eye. Your eyes observe the world around you by taking in light, taking in photons that are then processed. The photons are processed by your visual cortex, which is in the lobe at the back of your brain known as your occipital lobe. Light comes in, photons are processed, occipital lobe in the back of your brain. On the James Webb, light comes in the front, processing and instruments are in the back. The way the light comes in, if you follow the arrows on the screen: light comes in, hits the OTE, the Optical Telescope Element, which is our primary mirror, those gold mirrors you see. They're not really gold. They're actually beryllium. They're gold plated. They have about as much gold as you can fit in a golf ball. A golf ball's worth of gold on this telescope. Light comes and hits the mirrors, bounces off those gold-plated mirrors, bounces into the secondary mirror, which is at the end of our long boom, and then that light funnels into that place Brian was taking the cap off of.

That location funnels it into our instruments on the backside. We have four instruments on board James Webb, a combination of cameras and spectrometers which help us not only observe the galaxy through cool pictures, but also gather data to help us observe it on a molecular level as well. That's the cold side. To understand this observatory just a bit more, we also need to understand that it's divided into two sides. The cold side is going to be the top side. We unbiasedly support the cold side here, because that was my side. This side contains your primary mirrors, your secondary mirrors, all your instrumentation.

The bottom, or the hot side, we were really clever with this. The hot side is the bottom of your sunshield, your star trackers, your antenna, communications, how you get all that data back down to Earth. It's called the hot side and the cold side because of its orientation to the sun. The cold side faces away from the sun. The hot side faces the sun. There's about a 16-meter difference on our sunshields, and there's about a 300-degree-Celsius difference between the two. On the hot side, you're at about 85 C. On the cold side, you're at about -233 C. You have to have the cold side at this temperature because, remember what I mentioned earlier, infrared is often misconstrued as thermal vision, heat, and things like that. Any heat that comes off of the satellite can interfere with the readings that we're getting from these galaxies. That's why hot side, cold side. Light comes in, lobe on the back. That's how we process. Keep that in mind.

What’s Inside the Module?

Four instruments on the back of this satellite. We have NIRSpec, NIRCam, MIRI, and one called FGS/NIRISS, but we just call it NIRISS. You've got four on the back. Your two at the top are going to be your near-infrared cameras, and I'll get into this later. Then your one on the right is going to be MIRI, which is your mid-infrared. NIRISS is more or less your GPS, think about it as GPS, but it also has a spectrograph within it. We're going to break down each of these instruments again, so you can understand how we take the data from each of these instruments and process it through. Then I'll do a little demonstration at the end of how we actually build one of these images. The four instruments on board are actually made up of a combination of sub-instruments as well. Each one has some mix of these sub-instruments within it. I'm going to walk through what each of them does, because then it will help you again understand how we take that light and process it through.

Aperture mask: think about it as, if you have multiple stars that are really close together, really bright stars. The aperture mask helps you pinpoint each star individually, extract the data from it, and then remove the interference between them. Interferometry is what it's called technically in the scientific community. The aperture mask emulates that by taking those two bright stars, separating them, doing the interference, and then you can actually track both of these exceedingly bright stars at one time. Then you think about micro-shutter arrays. On James Webb we quite literally have, think of them as little trap doors, 248,000 little trap doors that open and close mechanically at any time. Each of these little trap doors can pinpoint different stars in the galaxy and study them independently based on which doors open.

This micro-shutter array allows us to explore a wider swath of the galaxy at one time, rather than the individual pictures that Hubble takes, for example, or the individual pictures that astrophotographers need to take. Your integral field unit is actually a combination of a camera and a spectrograph, or spectrometer. What these things allow you to do is not only take a picture of, let's say, an exoplanet, think of Kepler, for example, a Kepler planet. You not only get that image back, you also examine it pixel by pixel, so you can actually get the molecular makeup of the atmosphere from it by utilizing an integral field unit. Spectrograph: think about Sir William Herschel earlier.

Sir William Herschel used these prisms to understand light. A spectrograph takes the light and divides it into different wavelengths, so that you're able to examine each wavelength of light that comes through it. Why is that important? Because now we can find out hypothetical colors of these galaxies. When you see those pictures come back, it's because we've analyzed the wavelength. If the wavelength is short, it's typically a bluish color. If the wavelength is long, it's typically a reddish color. Again, we'll get into that. I've got a fun chart that will actually help explain that a little better.

Coronagraphs: if you've seen any of our experiments that look at the sun, for example, a coronagraph is quite literally an opaque layer that you place over an exceedingly bright star, or whatever you're looking at, so you can observe what's around it. How do we understand solar flares, things like that? How do we understand what the sun might look like? It's almost like an eclipse. Think of those as an eclipse. Then, cameras. Cameras are cameras. James Webb has only three cameras on board: two near-infrared cameras, one mid-infrared camera.

Like I said, each instrument is broken down into different flavors. Let's start with MIRI. That box on the left there is what it looks like inside the instrument suite. MIRI is broken up into these four components, made of spectrographs, cameras, integral field units, and coronagraphs. We've got the microns for the area of the infrared scale that it's able to detect. We put that there just because, again, wavelengths. That'll make more sense when I do the comparison of the two. As a combination of a camera and a spectrograph, MIRI is the only one that detects light in the mid-infrared section of the spectrum, which is really necessary not only to observe unique parts of a star system or unique parts of exoplanets; it helps us discover some of the dustier, cooler parts. If you think rings around planets, so if an exoplanet has a ring around it, we'll typically use MIRI to see the ring around that planet, for example.

Moving into NIRCam, it's the only near-infrared camera that also has a coronagraph on it. I mentioned the coronagraph earlier, so it can look at really bright areas. Most of the images that you see from James Webb are typically taken through NIRCam. Any of the exoplanets that you hear about or discover from James Webb are typically discovered through NIRCam because it's got that coronagraph. We know that exoplanets typically need what to survive? A star. A hot star that they orbit around, typically. The only way to see past that hot star is to use the coronagraph: block out the star, observe the exoplanets, see how big they are, how far away they are, and then determine if they are in a Goldilocks zone or not. Goldilocks zones are the zones around stars where water could potentially form on a planet's surface.

Jumping into the next one. This is, again, NIRISS. NIRISS has got the combination of the aperture mask, spectrographs, and the cameras. It's going to be the GPS for the system. When I start talking about the science data packets that you get back from James Webb, for example, your science data packets will often contain PNT packages, position, navigation, and timing packages, from your satellites. Typically, you'll get those from NIRISS, because in order to know where you're pointed in the general universe, you need a guide star. NIRISS points at that star, locks onto that star, does calibration, and then will observe around that star, point to another one, lock onto that star, same thing, copy and paste.

Then our final instrument is going to be NIRSpec. NIRSpec is almost like the boosted version of NIRCam. Instead of a coronagraph, we gave it that micro-shutter array, and it's a spectrograph. You don't block out a star with a coronagraph here; instead you're using that micro-shutter array to open up only the small parts of the field you want to observe. The micro-shutter array, again, has those multiple eyes. We actually designed it around the common housefly's eye. A housefly's compound eye has many different facets it can look through at once; think of that as the micro-shutter array.
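
Pulling the last few paragraphs together, here is a rough summary of the four instruments in code. The wavelength ranges are approximate published values, not numbers from the talk, so treat them as ballpark figures.

```python
# Rough summary of the four instruments described above. Wavelength ranges are
# approximate published values (in micrometers), not figures from the talk.
INSTRUMENTS = {
    "NIRCam":     {"band": "near-infrared", "range_um": (0.6, 5.0),
                   "extras": ["coronagraph"]},
    "NIRSpec":    {"band": "near-infrared", "range_um": (0.6, 5.3),
                   "extras": ["micro-shutter array", "integral field unit"]},
    "MIRI":       {"band": "mid-infrared",  "range_um": (5.0, 28.0),
                   "extras": ["coronagraphs", "integral field units"]},
    "FGS/NIRISS": {"band": "near-infrared", "range_um": (0.8, 5.0),
                   "extras": ["aperture mask", "fine guidance sensor"]},
}

for name, info in INSTRUMENTS.items():
    lo, hi = info["range_um"]
    print(f"{name}: {info['band']}, ~{lo}-{hi} um, {', '.join(info['extras'])}")
```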

I'm going to walk through an example of what each of these instruments looks like. The top left is going to be NIRSpec, the bottom right is going to be MIRI. What you see now is light entering the actual telescope. Light's going to be entering the telescope. It's going to pass through our second flat mirror and get picked up by the beam of our FORE optics, which are basically just the area of the first bounce of the light. From there we get the initial image of the universe. You saw that color wheel that just spun on MIRI; that's the filter wheel it goes through to help us analyze whatever wavelengths are going to be used in the actual data processing package.

From there, the wavelengths actually split into about 26 to 30 different filters, based on the parameters that we're able to set. James Webb is an interesting telescope because it works on the premise that folks make proposals to say, I'd like to study this galaxy with James Webb through these filters, for example. You set those parameters and that's how we know what filters to utilize. I'll get into this in a bit; that's narrow-band observation. There's also wide-band, or broad-band, observation, which just means: shoot it through all the filters, and whichever ones bounce back wavelengths, I'll determine those colors. You use narrow band, for example, when you're looking for very specific molecules in the universe. If we're looking for hydrogen, for example, or something like that, you'll say, I need you to shoot through filter 330w, for example, and it'll come back and give us that data.
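
For readers who want to connect filter names like f200w to the narrow-band versus broad-band idea: JWST filter names encode an approximate central wavelength in hundredths of a micrometer plus a trailing letter for bandwidth (W for wide, M for medium, N for narrow). The little helper below is my own illustration of that convention, not part of any official tooling.

```python
# Illustrative sketch of the JWST filter-naming convention: central wavelength
# in hundredths of a micrometer, plus a bandwidth letter (W wide, M medium, N narrow).

def describe_filter(name: str) -> str:
    width = {"W": "wide (broad-band imaging)",
             "M": "medium",
             "N": "narrow (specific emission lines / molecules)"}[name[-1]]
    central_um = int(name[1:-1]) / 100.0   # e.g. "200" -> 2.00 micrometers
    return f"{name}: ~{central_um:.2f} um, {width}"

for f in ["F090W", "F187N", "F200W", "F335M", "F444W"]:
    print(describe_filter(f))
```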

The Visible Light Spectrum

We have all this light, we have all these photons, but it doesn't really matter if we don't have a system that matches up with our eyes. Everyone knows what this is, it's the visible light spectrum. Everyone remembers this growing up, it's ROYGBIV. Really, though, it's ROYGBV, because if you think about it, Isaac Newton, who was the one who came up with this visible light spectrum, just popped indigo in there. He had a mystical fascination with the number seven, and so decided indigo goes right here.

For us, within James Webb, we actually decided not to do ROYGBIV, but to do ROYGCBV. We added cyan into that. Cyan is going to come right before the B. For all intents and purposes, we wanted to include cyan because with shorter wavelengths you often find that bright blue hue of light as opposed to the darker blue. Again, we need to first understand the visible light spectrum to then understand how we layer it with infrared to then interpret color in the universe. We humans, think of your eyeballs, we see colors through three sensors. This is a picture of your retina. We're just going to cover two basic things to understand how we came up with the design of the instruments for James Webb. Your eyes see color through the cones in your eyes. We have three color sensors in our eyes: red, green, and blue.

Growing up, when you hooked up your TV to your Nintendo 64 or whatever it was, the video adapters on the back of your TV were red, green, and blue. I told that at a high school once and they all looked at me like, what are you talking about? I had to go the HDMI route. You've got blue, green, and red, from your shortest wavelength to your longest wavelength. The fun thing about green is that it also carries the brightness. Besides the fact that it's meant to mimic a VCR, a DVD player, things like that, our human eye actually interprets light in that way through the cones. The problem is that your cones have a higher light threshold than your rods. Rods determine the grayscale, so blacks and whites are your rods.

Then you layer that with color, and you get your cones. Your cones do the color aspect of it. It's why we have things such as color blindness: it's the absence of, or damage to, a cone, a sensor within your eye. Think of the instruments on James Webb the same way: they determine how we look at grayscale versus how we look at the color scale, just like your eye.

One important thing to remember about James Webb, again thinking about your eye: think about your eye as a bucket. It's constantly taking in light, which is how you know that I'm up here giving a presentation right now, or how you can see that there are slides on the screen. Your eyes can only take in a certain amount of photons, a certain amount per bucket. James Webb is a much larger bucket: many more photons, a lot more light that it can take in and process, so a lot larger of an area.
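
To make the bucket comparison concrete, here is a back-of-the-envelope calculation. Both numbers are my assumptions rather than figures from the talk: Webb's primary mirror has roughly 25 square meters of collecting area, and a dark-adapted human pupil is on the order of 7 millimeters across.

```python
# Back-of-the-envelope "bucket" comparison. The numbers here are assumptions:
# Webb's mirror collects roughly 25 square meters of light, while a dark-adapted
# human pupil is on the order of 7 mm across.
import math

webb_collecting_area_m2 = 25.0                       # approximate published value
pupil_diameter_m = 0.007                             # assumed dark-adapted pupil
pupil_area_m2 = math.pi * (pupil_diameter_m / 2) ** 2

print(f"Webb gathers roughly {webb_collecting_area_m2 / pupil_area_m2:,.0f}x "
      "more photons per second than your eye")      # on the order of hundreds of thousands
```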

When you think about that bucket, when you think about the visible light spectrum, when you think about the infrared light spectrum, how do we make a color picture in space? Again, remember we see in three colors that you can then mix in different ways to interpret just about any color in the universe: red, green, blue, so three buckets. We're going to take the visible light spectrum, match it up against the infrared light spectrum, and divide that into three buckets, one on the far left, one on the far right, one in the middle: near, mid, and far infrared light. We're going to take those buckets, and then we assign filters to them, your red, green, and blue filters. You mix them in a certain way and that's how you get colors in the universe. You have short wavelengths that get identified as blue, long wavelengths that get identified as red, and green for what's in between those two. What you come back with is this assignment of light from your shortest wavelengths to your longest wavelengths.

The chart at the very bottom is going to be your narrow bands that I explained earlier, and all the way at the top are your broad bands. With each of these filters that you saw spinning on the color wheel, you take light in from the galaxy and it passes through these filters. The filters that respond in certain ways get assigned certain colors, and the ones that don't, do not get assigned those colors. That's how we know which star systems are young versus which star systems are old. Your young ones, the ones that are just being born or whose light is just reaching your eye, are blue. Your older ones are typically red. When you think of a 13-billion-year-old image, it's going to be red. Again, it's going to be red because of that cosmological redshift, as the light is stretched over and over on its way to us. Keep this image in mind as we get to the demonstration, so you can recall why certain colors are being assigned to certain layers of this image.

How do we go from an image like this, which you can barely see, it's just a black image (again, think of your rods, it's a black image), to something like this, a fully composite, fully layered image? We first have to divide it into components that, again, our eyes can appreciate, and that's through multiple layers. That first layer there, that gray layer, is going to be known as image stacking. Image stacking, star alignment, things of that nature that we need to go through in order to take these oftentimes hundreds of pictures and stack them all together.

Each day, the downlink we get, we can probably get it hourly because it's on an international download link; it's somewhere between hourly and every 6 hours, and it's about 250 gigs of data every time it drops data to us. That's the data package, typically, not per shot, but per analysis that we have. It's 250 per. When you think about things like that, you think about just the steps we have to go through to get from, again, a totally black picture to this composite image that you see again on the right there.

Data Packets: File Formats and Products

Moving into stuff that I think the general software group will care about. Now that we've filled in all of this, how does the instrument work? How is the design modeled on our eyes? How do we take light in? How do we process it? Where do the colors come from? How does it get to the telescope? Now we have to understand the file formats and the data packages that are delivered through the hot side, through the antennas, to our ground station. Your data packages are typically delivered in four types of formats. This is going to be an example of what it looks like to download from just one filter of James Webb. Look at that second folder down. It reads JWST02731. At the very end, it says f200w. If you recall from my color slide, f200w is a filter, typically a shorter-wavelength filter. We know that that filter captured this set of data. That's one filter. We're going to look at the Carina Nebula. The Carina Nebula is a star-birthing place. There's a lot of young stars in it. It's a really popular image. We'll take a look at it.

In this folder, you only have three file types plus a manifest. You've got a JSON file, you've got an ecsv, and you have a FITS file. The FITS file that we really want to look at is that i2d file. That's going to be the one that contains not only the calibration stack, but also the image stack that we're going to have. I'm going to walk through what each of these files means, even though you might be familiar with them. We've got four file types. The first one is FITS, Flexible Image Transport System. The way our FITS files are presented to us is with a header and binary data. This header and binary data just tell us information about what it was looking at, the parameters, again, that were set against it before it was pushed back to us. It typically comes down in one of these manners. You can look at things like calibration, the spacecraft attitude, oftentimes the calibration data, the coordinates that were associated with the program, the observation.

Things like that are what get pushed down to us, along with the associated binary data. That's how we process that FITS data. The second one is going to be your JSON file. Everyone's familiar with JavaScript in some way. We use this as a calibration; this is going to be a calibration for the actual image that we get back. Your ecsv file is an enhanced CSV file. It's going to be your huge data dump: your coordinates for what you shot, your parameters for what filter you shot it through, your very jumbled Excel-style file when you open it up. Your ASDF file, that's going to be your metadata. Just like when you shoot on your iPhone, for example, and it tells you, I was standing at QCon and I took this picture at 9:00 or 10:00 in the morning, and it was on this date. That same metadata applies to a satellite when it's shooting that frame.

The reason the ASDF file is not featured in this screenshot we have here is that the ASDF file is typically rolled into the FITS file. It's typically rolled into one of those two FITS files that you see there, just because it exists as metadata. The MANIFEST is not super important. The MANIFEST is something that comes with each and every image. You'll see it if you try this yourself.
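
If you want to poke at one of these files yourself, a hedged sketch with Astropy (listed in the Resources below) might look like the following. The filename is hypothetical; use whichever _i2d FITS file your download actually contains, and note that the exact extensions present can vary by product.

```python
# Hedged sketch: opening an _i2d FITS product with Astropy.
# The filename below is hypothetical; substitute whatever MAST delivered to you.
from astropy.io import fits

path = "jw02731_nircam_f200w_i2d.fits"   # hypothetical file for the F200W filter

with fits.open(path) as hdul:
    hdul.info()                           # lists the header + data units in the file
    header = hdul[0].header               # primary header: program, target, filter, ...
    print(header.get("TARGNAME"), header.get("FILTER"))
    sci = hdul["SCI"].data                # calibrated 2-D image (extension name typical
    print(sci.shape, sci.dtype)           # for JWST i2d products)
```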

Making a Full Color Image

How do we make these photos? It comes down to six steps. The first is the image stacking that I mentioned earlier, which is image stacking and image alignment. A lot of this is artistic liberty. You look into the universe, there's no up, there's no down. If you see an image and you think it looks better flipped over, the artist is going to flip it over. So the first thing is image stacking. You're looking at all the files that you get from your FITS file. It comes in typically as 9 or 10 photos that are a combination of calibration and the actual image that you want. You're going to stack those together. You're going to align them so that they're pretty. The next thing you're going to do is work from your shortest wavelengths to your longest wavelengths. You're going to do your blues, you're going to do your greens, your orange, and your red. I include orange in this because the one we're actually looking at has a lot of oranges.

For that reason, we decided to include orange in this. Then, finally, step six, you're going to get the fully calibrated composite image that you'll have there. Like I said, for the purposes of this, we're going to do the Carina Nebula. We have a repository that almost all of our observatories go through; it's called MAST. It not only includes the information that James Webb gives us back, but also includes Hubble, Spitzer, and a number of different missions that are up there. This is all possible through the combined study of not only NASA, but ESA and SETI and things of that nature, that have contributed the data to get us here today. This is going to be a review of an image that myself and a good friend put together of the Carina Nebula. We're going to do a run-through of it.

What you'll see here, if you follow my cursor at the top: first, we're going to enter a target that we want to do. You can do the Cat's Paw Nebula. You can do the Tarantula Nebula. You can do some of the oldest images of the universe, which is SMACS 0723. We'll get to that in a bit. For today, we're going to do NGC 3324, which is the Carina Nebula. We're going to pull up that nebula, fast forward to this. As it comes up, you'll see what's known as an Astro view on the right. This Astro view is the first interpretation of that swath of the universe you want to look at. You saw how that just populated; that's all the information coming up from all the data that's ever been shot in that area. You see, as I zoom out a bit, it gets further and further out. You can actually filter it down based on the mission type on the left there. Just for today, we want James Webb. You see, it's a lot fewer images now.

If you blow it back out to Spitzer, for example, which was up there for years, you have much more that you're actually looking at. For today, we're just going to look at James Webb. You get James Webb. If you actually click on that rectangle or the circle or the box, whatever it is, it'll highlight the data packages that you need for that swath of the universe that you want to look at. You see, we've got about six files there. I already filtered it a bit, but I didn't filter down to science data only. You should typically filter down to just science data.

If not, you'll get the calibrations, which, again, are just random shots that you then observe around. There's a ton of other options. You see me scrolling down. There's a ton of other options there where you can do the target name, you can do a wavelength list, you can even do the filters that you want to use. This is an important part, because we can filter based on either NIRCam, or MIRI, or whatever the instrument is at the time.

Again, remember, most of the photos that you see come from NIRCam. We're going to do NIRCam for this particular experiment. You've got NIRCam and you boil it down to those six. The next step is that I'm going to download these things. There's another point where you can actually filter them down further if necessary. This column here that I'm pointing at, recall the filters, that chart I talked to you about earlier: if you want to look at just specific colors, you go to the filter column and say, I want to just look at the long wavelengths, and it'll give you an example of just looking at the long wavelengths.
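
The same search-and-download workflow can also be scripted. This is a sketch using astroquery's MAST module, which is my substitution for the web portal shown in the talk; the target and instrument values mirror the demo, but the exact filtering options may need adjusting against the astroquery documentation.

```python
# Sketch of the MAST workflow in code (assumes the astroquery package).
from astroquery.mast import Observations

# 1. Search the archive for JWST NIRCam observations of the Carina Nebula target.
obs = Observations.query_criteria(
    target_name="NGC 3324",
    obs_collection="JWST",
    instrument_name="NIRCAM*",       # wildcard: any NIRCam mode
)

# 2. List the files behind those observations and keep science products only.
products = Observations.get_product_list(obs)
science = Observations.filter_products(
    products,
    productType="SCIENCE",
    productSubGroupDescription="I2D",   # the calibrated image stacks used in the demo
)

# 3. Download them locally for stacking and color assignment.
Observations.download_products(science)
```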

Let's skip to the next one here. The next thing you do, after you download all your data, after you download these really big data packages, is pick a platform to build this on; there are a number of different platforms you can use. For this experiment, I'm going to use PixInsight, but you can use SAOImageDS9 and Siril as well. I'll give you links to those. I'm going to use PixInsight. This is opening up the i2d FITS file that I mentioned earlier, in PixInsight. You see it has a number of images that are layered into the i2d FITS file. You open up the i2d FITS file. I'm closing out the top layers because they're just black. They're just black images that were stacked on that we can't use.

The image here is a calibration. You can tell it's a calibration because it looks like it's moving through those slits. We don't need the calibration file for this. We don't need the black files either. This is another calibration file. You can keep it for other purposes within what you're trying to do; for our purposes here today, we don't need it. What you do is get to this last image here. I like PixInsight because it has this cool auto-stretch feature. You can auto-stretch your image, which basically means it shifts the data from being linear, just a whole bunch of raw values, to being nonlinear.
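
Here is a rough sketch of what that kind of stretch does, assuming nothing about PixInsight's internals: the calibrated image is linear and mostly sits near zero, so a non-linear remap (asinh in this example) pulls the faint structure up to where your eye can see it.

```python
# Rough sketch of a non-linear stretch (not PixInsight's actual algorithm):
# clip outliers, rescale to [0, 1], then apply an asinh curve to lift faint detail.
import numpy as np

def auto_stretch(img: np.ndarray, softening: float = 0.02) -> np.ndarray:
    """Rescale to [0, 1], then apply a non-linear asinh stretch."""
    lo, hi = np.nanpercentile(img, [0.5, 99.5])        # clip extreme outliers
    scaled = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return np.arcsinh(scaled / softening) / np.arcsinh(1.0 / softening)

# stretched = auto_stretch(sci)   # 'sci' from the FITS sketch earlier
```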

After you download all the images, you get something that looks like this. This is the upside-down version of the Carina Nebula. Now you have to do each individual layer. The first one I did was a short wavelength, that 200w one. This new one that we're doing is the 187 wavelength. You have to do this for each layer. You have a blue layer, you have a green layer, you have a red layer, you have an orange layer. You might have another two blue layers; you have to do this for all of them. You stack them on top of each other in the actual program. You layer them on top of each other. You flip them upside down if they need to be flipped upside down. PixInsight has this cool star alignment feature that you can actually utilize, where you can click a certain segment of what you're doing and it'll actually align everything to that segment.

Then once all those are aligned, you'll see I actually loaded it into Photoshop, because we can use Photoshop. You see the chart down there on the right side; all of those are your filters. You stack them from shortest to longest. Go through your filters. Then what you're going to do is mask each filter. You mask each filter to the corresponding color. Then once you get to the end of it, you look at your chart and say, I have these filters, and you assign them those colors. These filters actually have hex codes on a website that I'll give you. You get through all of those colors. Then, at the end, once you finish doing all the masking, it looks something like this. This is the image of the Carina Nebula that you would have once you stack everything on top of each other. This is just the flow of our data going from uncalibrated to calibrated data. This is how you get an i2d file.
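
As a code-level summary of that chromatic-ordering step, here is a minimal sketch that stacks stretched, aligned filter layers into an RGB image. The filter-to-channel mapping is illustrative, my own guess at a reasonable assignment, not the exact recipe used for the Carina image.

```python
# Minimal sketch of chromatic ordering: shortest wavelengths feed the blue channel,
# longest feed red, and the layers are stacked into one RGB image.
import numpy as np

def to_rgb(layers: dict[str, np.ndarray]) -> np.ndarray:
    """layers maps filter name -> stretched, aligned 2-D image (all same shape)."""
    channel_for = {            # shortest -> blue ... longest -> red (assumed mapping)
        "F090W": "blue", "F187N": "blue",
        "F200W": "green", "F335M": "green",
        "F444W": "red",  "F470N": "red",
    }
    shape = next(iter(layers.values())).shape
    rgb = {c: np.zeros(shape) for c in ("red", "green", "blue")}
    counts = {c: 0 for c in rgb}
    for name, img in layers.items():
        channel = channel_for[name]
        rgb[channel] += img
        counts[channel] += 1
    # Average each channel and stack into an (H, W, 3) array ready for display.
    return np.dstack([rgb[c] / max(counts[c], 1) for c in ("red", "green", "blue")])
```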

SMACS 0723 – The 13-Billion-Year-Old Image

Lastly is the 13-billion-year-old image that we took with the James Webb Space Telescope. This is from an area of the sky known as SMACS 0723. It sounds almost like a child's name on Elon Musk's board of future children's names. This image actually has over 45,000 galaxies within it. Would you extend your arm out in front of you, one index finger out like this for me? At the very tip of your index finger, rest a small grain of sand. We know that the universe is made up of millions and billions of galaxies. We know that galaxies are made up of hundreds of millions and billions of stars and all types of stuff like that. That small grain of sand was that original image I showed you, which was SMACS, which hosts those 45,000 galaxies in it. You've seen how far out we've zoomed since then, just to see what the universe actually contains, how much more of it we need to explore, and how much more we hope to do with the technology that we have.

Resources

These are resources that you can use. There's one on how to process data products. There's the MAST site where you can actually download all the data. There's a file format guide in there where you can understand how to process each file. Then, PixInsight, Siril, and SAOImageDS9 are the three platforms that we would recommend, aside from Astropy. At the very bottom, there are some cool JWST images that you can check out.

 
