Fish Census


Project abstract

Today's acquisition systems make it easy to record hours of video. In most cases, however, these videos cannot be analyzed automatically because of the complexity of the observed scene, and this is particularly true of underwater wildlife footage. Species sampling therefore requires a human expert who must watch the entire video, and sometimes replay complex scenes several times. Analyzing 10 minutes of video takes between 15 minutes and 2 hours 30 minutes, depending on how rich the fauna in the video is. These analyses tie up an expert for a long time on work that is essential but not very rewarding in itself. Moreover, these observation and counting tasks demand sustained concentration and quickly become exhausting.

To simplify and automate these observation and identification phases as much as possible, we have developed algorithms that automatically analyze videos in order to:

  1. Locate the relevant sequences within a video, i.e. the sequences in which wildlife is actually present.
  2. Track the species present by identifying their trajectories during those sequences.

In this study, we focused on sequences with a limited number of species present in the video. This automatic processing eases the expert's workload, letting them concentrate on the species identification phase.
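The abstract does not detail the detection method, but the keywords point to background subtraction. The following is a minimal, hypothetical sketch of step 1 (locating sequences in which wildlife is present), assuming an OpenCV MOG2 background subtractor; the function name find_relevant_sequences, the foreground-ratio threshold, and the minimum sequence length are illustrative choices, not values from the project.

```python
import cv2

def find_relevant_sequences(video_path, fg_ratio_threshold=0.002, min_len=25):
    """Return (start_frame, end_frame) ranges in which enough foreground
    pixels are detected to suggest that wildlife is present."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    sequences, start, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep confident foreground pixels (255) and ignore shadows (127).
        fg_ratio = (mask == 255).sum() / mask.size
        if fg_ratio >= fg_ratio_threshold:
            # Open a new sequence, or extend the current one.
            start = frame_idx if start is None else start
        elif start is not None:
            # Foreground dropped below the threshold: close the sequence
            # if it lasted long enough (min_len frames, e.g. ~1 s at 25 fps).
            if frame_idx - start >= min_len:
                sequences.append((start, frame_idx))
            start = None
        frame_idx += 1
    if start is not None and frame_idx - start >= min_len:
        sequences.append((start, frame_idx))
    cap.release()
    return sequences
```

The frame ranges returned by such a step could then be handed to a tracking stage that follows the detected foreground regions from frame to frame in order to recover the trajectories mentioned in step 2.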

Keywords

computer vision, background subtraction


  • Jean-Christophe Burie
  • Marie-Neige Chapel
  • Delphine Mallet
  • Axelle Pochet

My publications

I am currently writing two publications for this project.