Generating summary videos based on visual and sound information from movies

Yurina Imaji, Masaya Fujisawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Vast quantities of video data are now widely available and easily accessible. Because users encounter so many videos, video summarization technology is needed to help them find videos that match their preferences. This study focuses on movies, proposing a method for extracting important scenes based on visual and sound information, and verifies the degree of harmony of the extracted scenes. The video segments thus characterized can be used to generate summary videos.

Original language: English
Title of host publication: Human Interface and the Management of Information
Subtitle of host publication: Information and Knowledge Design - 17th International Conference, HCI International 2015, Proceedings
Editors: Sakae Yamamoto
Publisher: Springer Verlag
Pages: 190-203
Number of pages: 14
ISBN (Print): 9783319206110
Publication status: Published - 1 Jan 2015
Event: 17th International Conference on Human-Computer Interaction, HCI International 2015 - Los Angeles, United States
Duration: 2 Aug 2015 - 7 Aug 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9172
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Conference on Human-Computer Interaction, HCI International 2015
Country/Territory: United States
City: Los Angeles
Period: 2/08/15 - 7/08/15

Keywords

  • Summary videos
  • Visual and sound information

