AI Breakthrough Sharpens Telescope Images: Astronomy’s Next Big Leap

By News Room · Published 7 May 2025

Authors:

(1) Hyosun Park, Department of Astronomy, Yonsei University, Seoul, Republic of Korea;

(2) Yongsik Jo, Artificial Intelligence Graduate School, UNIST, Ulsan, Republic of Korea;

(3) Seokun Kang, Artificial Intelligence Graduate School, UNIST, Ulsan, Republic of Korea;

(4) Taehwan Kim, Artificial Intelligence Graduate School, UNIST, Ulsan, Republic of Korea;

(5) M. James Jee, Department of Astronomy, Yonsei University, Seoul, Republic of Korea and Department of Physics and Astronomy, University of California, Davis, CA, USA.

Table of Links

Abstract and 1 Introduction

2 Method

2.1. Overview and 2.2. Encoder-Decoder Architecture

2.3. Transformers for Image Restoration

2.4. Implementation Details

3 Data and 3.1. HST Dataset

3.2. GalSim Dataset

3.3. JWST Dataset

4 JWST Test Dataset Results and 4.1. PSNR and SSIM

4.2. Visual Inspection

4.3. Restoration of Morphological Parameters

4.4. Restoration of Photometric Parameters

5 Application to real HST Images and 5.1. Restoration of Single-epoch Images and Comparison with Multi-epoch Images

5.2. Restoration of Multi-epoch HST Images and Comparison with Multi-epoch JWST Images

6 Limitations

6.1. Degradation in Restoration Quality Due to High Noise Level

6.2. Point Source Recovery Test

6.3. Artifacts Due to Pixel Correlation

7 Conclusions and Acknowledgements

Appendix: A. Image restoration test with Blank Noise-Only Images

References

ABSTRACT

The Transformer architecture has revolutionized the field of deep learning over the past several years in diverse areas, including natural language processing, code generation, image recognition, time series forecasting, etc. We propose to apply Zamir et al.’s efficient transformer to perform deconvolution and denoising to enhance astronomical images. We conducted experiments using pairs of high-quality images and their degraded versions, and our deep learning model demonstrates exceptional restoration of photometric, structural, and morphological information. When compared with the ground-truth JWST images, the enhanced versions of our HST-quality images reduce the scatter of isophotal photometry, Sersic index, and half-light radius by factors of 4.4, 3.6, and 4.7, respectively, with Pearson correlation coefficients approaching unity. The performance is observed to degrade when input images exhibit correlated noise, point-like sources, and artifacts. We anticipate that this deep learning model will prove valuable for a number of scientific applications, including precision photometry, morphological analysis, and shear calibration.

Keywords: techniques: image processing — galaxies: fundamental parameters — galaxies: photometry — galaxies: structure — deep learning — astronomical databases: images

1. INTRODUCTION

A deeper and sharper image of an astronomical object offers an opportunity for gaining new insights and understanding of it. One of the most notable examples is the advent of the James Webb Space Telescope (JWST), launched in 2021, which has since continuously expanded our knowledge of the universe through its discoveries, thanks to its unprecedented resolution and depth. This trend is reminiscent of a similar era three decades ago when the Hubble Space Telescope (HST) became the most powerful instrument available to us at that time.

Astronomers’ ongoing efforts to enhance the depth and clarity of astronomical imaging extend beyond advancements in instrumentation to include significant developments in the algorithmic domain. Early techniques relied on straightforward Fourier deconvolution (e.g., Simkin 1974; Jones & Wykes 1989). A primary challenge with this approach is noise amplification, and linear filtering was suggested to remedy the frequency-dependent signal-to-noise ratio issue (e.g., Tikhonov & Goncharsky 1987). However, the filtering method provided only band-limited results. With the development of computer technologies, more computationally demanding, sophisticated approaches based on the Bayesian principle with various regularization schemes were introduced (e.g., Richardson 1972; Lucy 1974; Shepp & Vardi 1982). Some implementations of these Bayesian approaches were shown to outperform the early Fourier deconvolution methods. Nevertheless, the regularization could not simultaneously recover both compact and extended features well. Multi-resolution or spatially adaptive approaches were suggested to overcome this limitation (e.g., Wakker & Schwarz 1988; Yan et al. 2012).
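To make the contrast between these classical approaches concrete, the sketch below degrades a stock test image with an illustrative Gaussian PSF and then compares naive Fourier deconvolution, which amplifies noise, with Richardson-Lucy iteration. The PSF width, noise level, and iteration count are arbitrary demonstration choices, not values taken from any of the cited works.

```python
# A minimal sketch (not from the paper) contrasting naive Fourier deconvolution
# with Richardson-Lucy deconvolution on a generic test image.
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

rng = np.random.default_rng(0)
image = data.camera().astype(float) / 255.0

# Illustrative Gaussian PSF (sigma = 2 pixels).
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

# Degrade the image: convolve with the PSF and add noise.
blurred = fftconvolve(image, psf, mode="same")
noisy = np.clip(blurred + rng.normal(0, 0.01, blurred.shape), 0, 1)

# Naive Fourier deconvolution: dividing by the transfer function amplifies
# noise at frequencies where the PSF response is small.
H = np.fft.fft2(psf, s=image.shape)
naive = np.real(np.fft.ifft2(np.fft.fft2(noisy) / (H + 1e-6)))

# Richardson-Lucy: iterative Bayesian (maximum-likelihood) deconvolution,
# which is much better behaved in the presence of noise.
restored = restoration.richardson_lucy(noisy, psf, 30)
```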

In recent years, the emergence of deep learning has significantly impacted the general field of image restoration and enhancement, and a growing number of studies report notable results in astronomical contexts as well (e.g., Schawinski et al. 2017; Sureau et al. 2020; Lanusse et al. 2021; Akhaury et al. 2022; Sweere et al. 2022). Deep learning algorithms, particularly convolutional neural networks (CNNs; Krizhevsky et al. 2012), have shown promising results in automatically learning complex features from images and effectively enhancing their depth and resolution (e.g., Zhang et al. 2017; Díaz Baso et al. 2019; Elhakiem et al. 2021; Zhang et al. 2022). By training on large datasets of paired raw and enhanced images, deep learning models can learn intricate patterns and relationships within the data, allowing for more precise and tailored image enhancement.
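As a hypothetical illustration of this paired-training idea, the sketch below fits a small residual CNN that maps degraded images onto their high-quality counterparts with a pixel-wise loss. The architecture, loss, and random toy data are placeholders, not any model from the studies cited above.

```python
# Minimal paired-image training sketch: learn a mapping from degraded to
# high-quality images with a small residual CNN (illustrative only).
import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction and add it back to the input image.
        return x + self.net(x)

model = TinyRestorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Placeholder batch: degraded inputs paired with high-quality targets.
degraded = torch.rand(8, 1, 64, 64)
target = torch.rand(8, 1, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(degraded), target)
    loss.backward()
    optimizer.step()
```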

Deep learning techniques offer advantages over traditional methods, such as Fourier deconvolution and Bayesian approaches, by providing greater flexibility and adaptability to diverse datasets. The integration of CNNs with other deep learning architectures, such as generative adversarial networks (GANs; Goodfellow et al. 2014) and recurrent neural networks (RNNs; Williams & Zipser 1989), has further expanded the capabilities of image enhancement (e.g., Ledig et al. 2016; Schawinski et al. 2017; Liu et al. 2021; Tripathi et al. 2018; Alsaiari et al. 2019; Rajeev et al. 2019; Wang et al. 2020; Tran et al. 2021; Kalele 2023). These hybrid approaches enable the generation of highly realistic and detailed images, pushing the boundaries of what is achievable with traditional methods alone. One outstanding limitation of CNN-based models is their restricted receptive field: long-range pixel correlations may not be captured effectively. Another critical drawback is their static weights, which prevent them from adapting to the input content during inference.
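The receptive-field limitation can be made concrete with a back-of-the-envelope calculation: for stride-1 convolutions, the receptive field grows only linearly with network depth. This is a generic illustration, not an analysis of any specific model cited above.

```python
# Receptive field of a stack of identical stride-1 convolutions:
# RF = 1 + num_layers * (kernel_size - 1)
def receptive_field(num_layers, kernel_size=3):
    return 1 + num_layers * (kernel_size - 1)

for depth in (5, 20, 100):
    print(f"{depth:3d} layers -> {receptive_field(depth)} pixels")
# Even 100 stacked 3x3 convolutions see only a 201-pixel window, so pixel
# correlations across a wide image cannot be captured directly.
```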

The Transformer architecture (Vaswani et al. 2017), which has revolutionized the field of deep learning over the past several years in diverse areas including the well-known large language models, is a potent alternative for overcoming these limitations of CNN-based models. However, in its original form, built on self-attention layers operating over pixels, the Transformer is infeasible to apply to large images because the computational complexity increases quadratically with the number of pixels.

Zamir et al. (2022) devised an innovative scheme that substitutes the original self-attention block with the multi-Dconv head transposed attention (MDTA) block. The MDTA block, which implements self-attention in the feature domain rather than the pixel domain, ensures that the complexity increases only linearly with the number of pixels, making application to large images feasible. Zamir et al. (2022) demonstrated that their model, named Restormer, attained state-of-the-art results for image deraining, single-image motion deblurring, defocus deblurring, and image denoising. However, its performance has not been evaluated in the astronomical context.
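A simplified, single-head sketch of this channel-domain ("transposed") attention idea is shown below. The actual MDTA block in Restormer is multi-headed and uses a learned temperature, so treat this only as an illustration of why the attention map becomes C x C rather than (HW) x (HW).

```python
# Simplified channel-domain attention, inspired by (but not identical to) MDTA.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.dwconv = nn.Conv2d(channels * 3, channels * 3, kernel_size=3,
                                padding=1, groups=channels * 3)
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.dwconv(self.qkv(x)).chunk(3, dim=1)
        q = F.normalize(q.flatten(2), dim=-1)  # (b, c, h*w), unit-norm rows
        k = F.normalize(k.flatten(2), dim=-1)
        v = v.flatten(2)
        # The attention map is c x c (channel by channel), not (h*w) x (h*w),
        # so its cost grows only linearly with the number of pixels.
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)  # (b, c, c)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project(out)

# For a 256x256 input with 48 channels, the attention matrix is 48x48 instead
# of 65536x65536, which is what makes large images tractable.
x = torch.rand(1, 48, 256, 256)
print(ChannelAttention(48)(x).shape)  # torch.Size([1, 48, 256, 256])
```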

In this paper, we propose to apply Zamir et al.’s efficient transformer to perform deconvolution and denoising to enhance astronomical images. Specifically, we investigate the feasibility of enhancing HST images to the quality of JWST images in both resolution and depth. We build our model on the implementation of Zamir et al. (2022) and employ a transfer learning approach, initially pre-training the model on simplified galaxy images and then fine-tuning it on realistic galaxy images.
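The sketch below outlines that two-stage recipe: pre-train a restoration network on simplified galaxy pairs, save the weights, and fine-tune the same weights on realistic pairs at a lower learning rate. The stand-in network, toy data, learning rates, and file name are placeholders, not the authors' actual training configuration.

```python
# Illustrative two-stage transfer learning: pre-train, then fine-tune.
import torch
import torch.nn as nn

def train(model, pairs, lr, epochs):
    # Minimize an L1 loss between restored outputs and target images.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for degraded, target in pairs:
            optimizer.zero_grad()
            loss_fn(model(degraded), target).backward()
            optimizer.step()

# Stand-in for the actual restoration network.
model = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))

# Toy stand-ins for the two datasets: lists of (degraded, target) batches.
simplified_pairs = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)) for _ in range(8)]
realistic_pairs = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)) for _ in range(8)]

# Stage 1: pre-train on simplified (GalSim-like) galaxy image pairs.
train(model, simplified_pairs, lr=2e-4, epochs=5)
torch.save(model.state_dict(), "pretrained.pt")

# Stage 2: fine-tune the same weights on realistic galaxy image pairs.
model.load_state_dict(torch.load("pretrained.pt"))
train(model, realistic_pairs, lr=2e-5, epochs=5)
```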

Our paper is structured as follows. In §2, we describe the overall architecture of Restormer and the implementation details employed in the current galaxy restoration model. §3 explains how we prepare training datasets. Results are presented in §4. We show the results when the model is applied to real HST images in §5 and discuss the limitations in §6 before we conclude in §7.
