Shiga University, Faculty of Data Science

Computer Vision Laboratory

Research Themes

Computer Vision-based Ocean Environment Monitoring

We conduct research on automatically measuring marine environments, such as seagrass and seaweed beds, from aerial and underwater images using image recognition technologies.

Research Overview

Focusing on seagrass and seaweed beds along the Japanese coast, we aim to construct a “Digital Twin of Blue Carbon Ecosystems.” To quantify the organic carbon dynamics derived from these ecosystems, we integrate observations of vegetation production and export, field surveys including water and sediment sampling from vessels, decomposition experiments, and ocean model simulations. Through this integrated approach, we seek to quantitatively assess carbon transfer from coastal vegetated habitats to offshore carbon reservoirs in Japan and to generate future projections under climate change scenarios, thereby contributing to climate change mitigation.
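The source does not give the model equations, so as a deliberately simplified sketch of the kind of budget such a digital twin would quantify, consider a daily box model with constant production, first-order decomposition, and a fixed fraction of the stock exported offshore each day. All parameter values and the function name are illustrative assumptions, not values from this research.

```python
def carbon_export(production, decomposition_rate, export_fraction, days):
    """Toy daily box model of organic carbon in a vegetated habitat.

    Each day: add `production` (gC m^-2), remove first-order
    decomposition, then export a fixed fraction of the remaining
    stock offshore. Returns (standing stock, cumulative export).
    All parameters are illustrative placeholders.
    """
    stock, exported = 0.0, 0.0
    for _ in range(days):
        stock += production                      # daily primary production
        stock -= decomposition_rate * stock      # in-situ decomposition
        out = export_fraction * stock            # offshore export
        stock -= out
        exported += out
    return stock, exported

# With these toy rates the stock converges to a steady state
# near production * 0.931 / 0.069 ≈ 13.5 over one year.
stock, exported = carbon_export(production=1.0, decomposition_rate=0.05,
                                export_fraction=0.02, days=365)
```

Even this crude balance shows why field-measured decomposition and export rates matter: the steady-state stock and the offshore flux are both controlled by those two parameters.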

Our research group develops image recognition technologies to automatically measure marine environments, such as seagrass and seaweed beds, from aerial and underwater imagery. These technologies will be utilized in the Digital Twin project to estimate ecosystem productivity and carbon dynamics.
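As a minimal sketch of the measurement step: once a segmentation model (for example a U-Net-style network; the actual architecture is not specified in the source) has labeled each pixel of an aerial or underwater image tile as seagrass or background, coverage statistics can be summarized from the mask. The mask, pixel area, and function name below are hypothetical.

```python
import numpy as np

def seagrass_cover(mask: np.ndarray, pixel_area_m2: float) -> dict:
    """Summarize a binary segmentation mask (1 = seagrass, 0 = background)
    into coverage statistics for one image tile."""
    n_pixels = mask.size
    n_seagrass = int(mask.sum())
    return {
        "cover_fraction": n_seagrass / n_pixels,
        "area_m2": n_seagrass * pixel_area_m2,
    }

# Toy 4x4 mask: 4 of 16 pixels classified as seagrass,
# each pixel covering 0.25 m^2 of seafloor.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1
stats = seagrass_cover(mask, pixel_area_m2=0.25)
```

Aggregating such per-tile statistics over a survey area is one straightforward way segmentation outputs could feed the digital twin's productivity estimates.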

Deep Learning-based Ocean Weather Forecasting and Its Applications

We conduct research on ocean–atmosphere forecasting and its applications using deep learning.

Research Overview

The goal of this research is to develop a new class of technologies centered on an Earth System Foundation Model—a deep learning–based generative model trained using physical simulation outputs and data assimilation results as supervisory signals—along with downstream tasks built upon this foundation model.

To achieve this goal, we construct a foundation model that reflects the unique characteristics of meteorological data. Based on this model, we develop weather forecasting methods and learning-based data assimilation techniques. Furthermore, to validate the performance of the foundation model, we develop explainable weather forecasting methods built upon the model.
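The source does not specify the pretraining objective. As one hedged illustration, foundation models for gridded geophysical fields are often pretrained with a masked-reconstruction pretext task; the snippet below shows only the loss bookkeeping for such a task, with a toy field and a trivial stand-in "model" (all shapes and names are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_mse(field, recon, mask):
    """Reconstruction error evaluated only at masked grid points,
    the pretext objective in masked-autoencoder-style pretraining."""
    return float(((field - recon) ** 2)[mask].mean())

# Toy gridded field (e.g., one variable snapshot on an 8x8 grid).
field = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.3          # hide ~30% of grid points
corrupted = np.where(mask, 0.0, field)   # what the model would observe

# Stand-in "model": predict the mean of the visible points everywhere.
recon = np.full_like(field, corrupted[~mask].mean())
loss = masked_mse(field, recon, mask)
```

A real foundation model would replace the stand-in with a network trained on simulation and data-assimilation outputs, but the objective bookkeeping is the same.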

Pattern Recognition-based Potential Fishing Ground Prediction

We conduct research on AI-based technologies to estimate “good fishing grounds.”

Research Overview

Our goal is to establish Marine Fisheries AI Technology (FishTech)—an integrated framework combining fisheries and oceanographic domain knowledge, marine sensing technologies, and AI—and to create a sustainable fisheries model that balances economic efficiency with resource conservation.

Sensor data collected during fishing operations are analyzed and processed using novel technologies that incorporate domain knowledge of fish ecology and ocean physics into pattern recognition and data assimilation techniques. Through this approach, we generate operational support information and mid-term fisheries management strategies.
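As a toy illustration of folding oceanographic domain knowledge into a recognition score (the actual features, weights, and model are not given in the source), a logistic combination of hand-chosen features such as a temperature anomaly, chlorophyll-a, and a thermal-front indicator might look like this. Every name and number here is a hypothetical placeholder, not a fitted model.

```python
import numpy as np

def fishing_ground_score(sst, chl, sst_front, w=(0.8, 1.2, 1.5), b=-2.0):
    """Toy logistic score combining domain-informed features:
    sea-surface temperature anomaly, chlorophyll-a concentration,
    and a thermal-front indicator. Weights are illustrative only."""
    z = w[0] * sst + w[1] * chl + w[2] * sst_front + b
    return 1.0 / (1.0 + np.exp(-z))

# Two candidate grid cells: one near a productive thermal front, one not.
near_front = fishing_ground_score(sst=1.0, chl=0.9, sst_front=1.0)
away = fishing_ground_score(sst=-0.5, chl=0.1, sst_front=0.0)
```

In practice the weights would be learned from catch records and sensor data, and the feature set would encode fish ecology and ocean physics rather than these three placeholders.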

Through collaboration among ocean meteorology, fisheries science, and information science, particularly image-based pattern recognition, we aim to open new applications for pattern recognition in the marine and fisheries domains.

Industrial Applications of Image Recognition Technology

We conduct research on the application of image recognition technologies across various industrial fields, including manufacturing, construction, agriculture, retail, and broadcasting.

Research Overview

Our research applies image recognition technologies to a wide range of industries, including manufacturing, construction, agriculture, retail, and broadcasting. Many of these themes are carried out as collaborative projects with industry partners.

Join Our Laboratory

We are always looking for passionate students and collaborators.
