Video Collages With Interesting Moments

Posted in: Media Optimization


Photo Collages and Video Collages

We may see video collages generated by hardware associated with Google. Google Photos has had a collage feature for some time; I can open it and see collages of pictures from the same location joined together. There is also a way of tagging “key moments” in videos using schema markup so that Google search results can point to those moments (highly recommended). A recent Google patent describes making video collages and refers to “interesting moments” in those videos. It doesn’t tell us the difference between a key moment in a video and the interesting moments that fill these video collages.


But it does describe why it might make video collages:

There are currently one billion smartphones in use, with the potential for seven times that number in the future. Smartphones get used for capturing and consuming content, like photos and videos. Videos convey more than photos because they capture temporal variation. But people may be less likely to view videos because not all parts of a video are interesting.

That is the context the patent’s background description presents.

Generating Video Collages

This patent refers to interesting moments in videos, as opposed to key moments. There are many help pages about marking up key moments in videos, but none of them call those moments interesting; still, they point to moments designated as noteworthy by the people who post the videos. The Video Collages patent lays out a framework describing how video collages filled with interesting moments might get built.

Using Schema To Tag Key Moments in Videos in Search Results

When I came across this patent, I got reminded of the Google Developers post on implementing SeekToAction markup: A new way to enable video key moments in search. In brief, it works like this:

Today, we’re launching a new way for you to enable key moments for videos on your site without the effort of manually labeling each segment. All you have to do is tell Google the URL pattern for skipping to a specific timestamp within your video. Google will then use AI to identify key moments in the video and display links directly to those moments in Search results.
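To make that concrete, here is a minimal sketch of what SeekToAction markup can look like, serialized from Python. The URLs are placeholders, and a real VideoObject would carry more required properties (description, thumbnail, upload date), so treat this as an illustration rather than copy-paste markup:

```python
# A minimal sketch of SeekToAction markup; the URLs are placeholders.
# Google substitutes a timestamp for {seek_to_second_number} in the
# target URL pattern.
import json

video_object = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Example video",
    "contentUrl": "https://example.com/video.mp4",
    "potentialAction": {
        "@type": "SeekToAction",
        "target": "https://example.com/watch?t={seek_to_second_number}",
        "startOffset-input": "required name=seek_to_second_number",
    },
}
print(json.dumps(video_object, indent=2))
```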

I also got reminded of people asking me questions about “key moments” found on YouTube videos. There is a Google blog post on this topic: Search helps you find key moments in videos. That post quickly tells us:

Starting today, you can find key moments within videos and get to the information you’re looking for faster, with help from content creators.

When you search for things like how-to videos with multiple steps, or long videos like speeches or a documentary, Search will provide links to key moments within the video, based on timestamps provided by content creators.

You’ll easily scan to see whether a video has what you’re looking for and find the relevant section of the content.

For people who use screen readers, this change also makes video content more accessible.

This Google Developers page tells us about those timestamps: Get videos on Google with schema markup

Implementations of the patent relate to a computer-implemented method to generate a collage. The method includes determining interesting moments in a video. It further comprises generating video segments based on those moments, where each video segment includes at least one of the interesting moments from the video. It then generates a collage from the video segments, where the collage comprises at least two windows and each window contains one of the video segments.

I also came across a Search Engine Land article on key moments in videos: Google officially launches SeekToAction for key moments for videos in search

I also found this support page on YouTube about audience retention: Measure key moments for audience retention

Key Moments in Videos May Be Similar to Interesting Moments in Video Collages

The patent provides a lot of information about interesting moments.

Operations of the video collages patent further include receiving a selection of one of the video segments in the collage and causing the portion of the video that corresponds to the selection to be displayed.

Determining the interesting moments in a video based on audio includes:

  • Identifying audio in the video
  • Identifying a type of audio in the video
  • Generating an interest score for each type of audio in the video
  • Determining the interesting moments based on the interest score for each type of audio

Determining the interesting moments based on motion includes the following steps (a sketch of this scoring appears after the list):

  • Identifying continual motion in the video
  • Identifying a type of action associated with the continual motion in the video
  • Generating an interest score for each type of action in the video
  • Determining the interesting moments based on the interest score for each type of action
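The patent doesn’t publish scoring values, but a minimal sketch of type-based interest scoring, with illustrative weight tables and an assumed equal weighting of the two signals, might look like this:

```python
# Hypothetical weights for audio and action types; the patent does not
# publish actual values.
AUDIO_WEIGHTS = {"laughter": 0.9, "applause": 0.8, "music": 0.6,
                 "booing": 0.7, "cough": 0.1, "background_noise": 0.05}
ACTION_WEIGHTS = {"cartwheel": 0.8, "skateboarding": 0.7,
                  "blowing_out_candles": 0.6, "walking": 0.2}

def interest_score(audio_type: str, action_type: str) -> float:
    """Combine the audio- and action-based signals into one score.
    Equal weighting is an assumption, not from the patent."""
    audio = AUDIO_WEIGHTS.get(audio_type, 0.3)    # default for unknown types
    action = ACTION_WEIGHTS.get(action_type, 0.3)
    return 0.5 * audio + 0.5 * action

print(interest_score("laughter", "cartwheel"))  # ~0.85
```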

The video segments in the collage may be configured to play automatically. At least one of the video segments in the collage may be configured to play at a different frame rate than the other video segments.

Assembling the collage from the video segments includes generating graphical data that renders the collage with video segments in windows of different sizes. Window sizes may be based on the interest scores for the video segments, the length of each video segment, and an artistic effect.

Making Video Collages of Interesting Moments

A computer-implemented method to generate a hierarchical collage includes:

  • Determining interesting moments in a video
  • Generating video segments based on the interesting moments
  • Grouping the video segments into groups
  • Generating first collages, each corresponding to one of the groups and each including at least two video segments
  • Selecting a representative segment for each group from the at least two video segments of each of the first collages
  • Generating a second collage that includes the representative segment for each group, where each representative segment links to the corresponding first collage
  • Receiving a selection of one of the representative segments in the second collage and causing the corresponding first collage to be displayed

Grouping the video segments may be based on the timing of each segment or on the type of interesting moment associated with each segment. An interest score may get derived for the interesting moments, and the representative segment for each group may be selected based on that score (sketched in code below).
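Here is a minimal sketch of the grouping and representative-selection steps, assuming timing-based groups (thirds of the video) and hypothetical segment dictionaries with "start" and "score" fields:

```python
# Bucket segments by which third of the video they fall in, then pick
# the highest-scoring segment of each bucket as its representative.
def hierarchical_groups(segments, video_length):
    groups = {0: [], 1: [], 2: []}
    for seg in segments:
        bucket = min(2, int(3 * seg["start"] / video_length))
        groups[bucket].append(seg)
    representatives = {
        bucket: max(segs, key=lambda s: s["score"])
        for bucket, segs in groups.items() if segs
    }
    return groups, representatives

segments = [{"start": 10, "score": 0.6}, {"start": 150, "score": 0.9},
            {"start": 400, "score": 0.7}]
groups, reps = hierarchical_groups(segments, video_length=480)
print(reps[0]["score"])  # 0.9 -- best segment of the first third
```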

The patent also describes a method comprising means for:

  • Determining interesting moments in a video
  • Generating video segments based on the interesting moments, wherein each of the video segments includes at least one of the interesting moments from the video
  • Creating a collage from the video segments, wherein the collage includes at least two windows and wherein each window includes one of the video segments

The systems and methods described in the patent address the problem of identifying interesting moments in a video by generating a collage that includes video segments of those moments.

The Video Collages of Interesting Moments Patent

The Video Collages patent is found at:

Collage of interesting moments in a video
Inventors: Sharadh Ramaswamy, Matthias Grundmann, and Kenneth Conley
Assignee: Google LLC
US Patent: 11,120,835
Granted: September 14, 2021
Filed: December 17, 2018

Abstract

A computer-implemented method includes determining interesting moments in a video. The method further includes generating video segments based on the interesting moments, wherein each of the segments includes at least one of the interesting moments from the video. The method further includes generating a collage from the video segments, where the collage includes at least two windows and wherein each window includes one of the video segments.

The patent tells us that searchers are more likely to view a video if they can preview interesting moments in it and navigate directly to those moments.

The patent describes a video application that:

  • Determines interesting moments in a video
  • Generates video segments based on the interesting moments
  • Creates a collage that displays the video segments in a single pane

For example, a video may have a first video segment of a child laughing, a second video segment of a dog running after the child, and a third video segment of the child blowing out candles on a birthday cake.

How Video Collages are Generated

The video application may generate video collages that display short (e.g., two-to-three-second) loops of the first, second, and third video segments. The frame rates of the video segments may differ. For example, the first video segment may be slow motion, the second fast motion, and the third regular speed.

When a user selects one of the video segments in the collage, the application may display the portion of the video that corresponds to the selection. For example, if the first video segment occurs at 2:03, selecting it causes the video to play from 2:03 (see the sketch below).
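A minimal sketch of that deep-link behavior, assuming a YouTube-style “?t=” query pattern (a common convention, not something the patent specifies):

```python
# Map a segment's "m:ss" timestamp to a deep link into the full video.
def segment_link(video_url: str, timestamp: str) -> str:
    minutes, seconds = timestamp.split(":")
    total_seconds = int(minutes) * 60 + int(seconds)
    return f"{video_url}?t={total_seconds}"

print(segment_link("https://example.com/watch", "2:03"))
# https://example.com/watch?t=123
```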

The video application may also generate a hierarchical collage. It may determine interesting moments in a video and then create video segments based on those moments. It could group the video segments into groups and generate first collages based on the groups. It could then select a representative segment for each group and generate a second collage that includes the representative segment for each group.

The groups may be based on timing or on a type of interesting moment associated with each video segment. Continuing with the example above, a first group could include a first video segment of a child laughing, a second video segment of a dog running after the child, and a third video segment of the child blowing out candles on a birthday cake, all occurring in the first third of the video.

This video application may also generate an interest score for each video segment and select the representative segment based on the interest score. For example, the third video segment of the child blowing out the birthday cake may have an interest score indicative of the most interesting video segment. As a result, the video application may select the third segment as the representative segment for the first group in the first collage.

When a user selects one of the representative segments in the second collage, the video application may cause the corresponding first collage to be displayed.

An Example Application That Generates Video Collages

This patent describes an application that includes a video server, user devices, a second server, and a network. It looks like it could generate video collages on a variety of hardware devices, and it may have been purposefully left wide open for hardware not yet developed.

Users may be associated with respective user devices, and the system may include other servers or devices.

The entities of the system are coupled via a network. The network may be conventional: wired or wireless, with many possible configurations, including a star configuration, a token ring configuration, or others. Furthermore, the network may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and other interconnected data paths across which many devices may communicate.

The database may store videos created or uploaded by users associated with user devices and collages generated from the videos.

The database may store videos developed independently of the user’s devices.

The database may also store social network data associated with users.

The user device may be a computer with a memory and a hardware processor: a camera, a laptop computer, a desktop computer, a tablet computer, a mobile telephone, a wearable device, a head-mounted display, a mobile e-mail device, a portable game player, a portable music player, a reader device, a television with an embedded or coupled processor, or another electronic device capable of accessing a network.

The user device is coupled to the network via a signal line. A signal line may be a wired connection, such as Ethernet, coaxial cable, or fiber-optic cable, or a wireless connection, such as Wi-Fi®, Bluetooth®, or other wireless technology. Each user accesses a respective user device.

Examples of User Devices Used to Create Video Collages

The user device can be a mobile device that gets included in a wearable device worn by the user. For example, the user device gets included as part of a clip (e.g., a wristband), part of jewelry, or part of a pair of glasses. In another example, the user device can be a smartwatch. The user may view images from the video application on a display of the device worn by the user. For example, the user may view the pictures on a smartwatch or a smart wristband display.

The video application may be a standalone application stored on the user’s device, or it may be stored in part on the user device and in part on the video server. For example, it may include a thin-client video application stored on the user device and a companion video application stored on the video server.

The video application stored on the user device may record video and transmit it to the video application stored on the video server, which generates a collage from the video and sends the collage back for display on the user device. In another example, the video application stored on the user device may generate the collage itself and send it to the video application stored on the video server. The video application stored on the video server may include the same components as the video application stored on the user device, or different ones.

The video application may be a standalone application stored on the video server. A user may access the video application via a web page using a browser or other software on the user’s device. For example, the users may upload a video stored on the device or from the second server to the video application to generate a collage.

The second server may include a processor, a memory, and network communication capabilities. The second server is a hardware server. The second server sends and receives data to and from the video server and the user devices via the network.

The second server may provide data to the video application. For example, the second server may be a separate server that generates videos used by the video application to create collages. In another example, the second server may be a social network server that maintains a social network where the collages may get shared by a user with other social network users. In yet another example, the second server may include video processing software that analyzes videos to identify objects, faces, events, a type of action, text, etc. The second server may get associated with the same company that maintains the video server or a different company.

Video Collages with Entity Information Attached

As long as a user consents to use such data, the second server may provide the video application with profile information or images that the video application may use to identify a person in a photo with a corresponding social network profile. In another example, the second server may provide the video application with information related to entities identified in the images used by the video application.

For example, the second server may include an electronic encyclopedia that provides information about landmarks identified in the photos, an electronic shopping website that provides information for purchasing entities identified in the images, an electronic calendar application that offers, subject to user consent, an event name associated with a video, a map application that provides information about a location associated with a video, etc.

The systems and methods discussed herein collect, store, and use personal information only upon receiving explicit authorization from the relevant users. For example, a user controls whether programs or features collect user information about that particular user or about other users of the program or feature, and users control which information pertinent to them gets collected and how it gets managed.

For example, users can be provided with control options. Certain data may be treated before it is stored or used so that personally identifiable information is removed. For example, a user’s identity may be treated so that no personally identifiable information can be determined, or a user’s geographic location may be generalized to a larger region so that the user’s particular location cannot be determined.

An Example Computer That Generates Video Collages

The computer may be a video server or a user device.

The computer may include a processor, a memory, a communication unit, a display, and a storage device.

A video application may get stored in the memory.

The video application includes a video processing module, a segmentation module, a collage module, and a user interface module. Other modules and configurations are possible.

The video processing module may be operable to determine interesting moments in a video. It may be a set of instructions executable by the processor to determine interesting moments in the video, stored in the computer’s memory and accessible and executable by the processor.

The video processing module may be stored on the video server. It may receive the video from the video application stored on the user device, or from a second server that stores movies or television shows.

The video processing module determines interesting moments in a video associated with a user. It may receive labels identifying the interesting moments and choose the interesting moments based on those labels. For example, the user interface module may generate a user interface that includes an option for the user to select frames, for example, by clicking on frames in the video, to identify interesting moments. The video processing module may associate metadata with the video that includes time locations for the interesting moments marked by the user. The video processing module may also receive an indication of what constitutes an interesting moment from a user. For example, the user may specify that interesting moments include people in the video saying a particular phrase or speaking on a specific topic.

Video Processing Finding Interesting Moments

The video processing module determines interesting moments by identifying audio in the video. It may determine the type of audio: for example, it may classify the audio as music, applause, laughter, booing, etc. It may also determine the volume level of the audio. For example, in a basketball game video, an increase in the audio from cheering or booing may be associated with an interesting moment, such as a basketball player missing a shot.

The video processing module may generate an interest score for each type of audio. For example, it may generate an interest score that indicates the moment is interesting based on the start of music or laughter, and a score that indicates the moment is not interesting based on a cough or general background noise. It may then determine the interesting moments based on the interest score for each type of audio in the video.

The video processing module also determines interesting moments by identifying continual motion in the video and identifying a type of action associated with that motion. It may detect motion by classifying pixels in an image frame as background or foreground.

The video processing module may classify all image frames or a subset of image frames of the video.

The video processing module identifies the background and the foreground in a subset of the image frames based on the timing of the image frames. The subset may include a few or all of the intra-coded frames (I-frames) of the video. For example, the video processing module may perform classification on every third frame in the video. In another example, it may perform classification on a subset of the frames, e.g., only I-frames, or I-frames plus a few or all of the predicted picture frames (P-frames), etc.

Comparing Foreground Motion in Video Segments

The video processing module may compare the foreground across many image frames to identify foreground motion. For example, it may use techniques such as frame differencing, adaptive median filtering, and background subtraction. This process advantageously identifies the motion of objects in the foreground. For example, in a video of a person doing a cartwheel outside, the video processing module may ignore movement in the background, such as trees swaying in the wind, yet still identify the person performing the cartwheel because the person is in the foreground.
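A minimal sketch of foreground-motion detection with background subtraction, using OpenCV; the sampling interval and foreground-pixel threshold are illustrative values, not taken from the patent:

```python
import cv2  # pip install opencv-python

def foreground_motion_frames(video_path, every_nth=3, min_fg_pixels=5000):
    """Return indices of sampled frames whose foreground mask is large
    enough to suggest motion of a foreground object."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    moving, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:            # classify a subset of frames
            mask = subtractor.apply(frame)    # foreground mask
            if cv2.countNonZero(mask) > min_fg_pixels:
                moving.append(index)
        index += 1
    cap.release()
    return moving
```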

The video processing module may then analyze the video to determine the action associated with the continual motion. For example, it may build a vector based on the continual motion and compare that vector with continual motion in other available videos, using it to identify a person walking a dog, punching another person, catching a fish, etc. In another example, it may perform image recognition to identify objects and the types of motion associated with those objects in other videos to identify the action.

For example, the video processing module identifies a trampoline and determines that a person is jumping on it, based on trampolines being associated with jumping, just as a cake is associated with cutting or blowing out candles and skis are associated with skiing. The video processing module may associate metadata with the video that includes timestamps for each action type. For example, it may generate metadata that identifies a timestamp for each instance of a person riding a scooter in the video.

Interesting Moments Based on Continual Motion in Videos

Also, the video processing module may determine an interesting moment based on the action associated with the continual motion. For example, it may determine that a video includes a user riding a skateboard and generate an interest score based on that type of action. The module may also adjust the interest score based on the quality of the action. For example, it may assign an interest score that indicates a more interesting moment when the frames with the movement show:

  • A person with a visible face
  • Edges where the quality of the images is high

These judgments may be based on the visibility of the action, lighting, blur, and the stability of the video.

Upon user consent, the video processing module may generate the interest score based on user preferences. For example, if a user has expressed an interest in skateboarding, the video processing module generates an interest score that indicates that the user finds skateboarding enjoyable. The user may provide explicit interests that the video processing module adds to a user profile associated with the user. When the user consents to the analysis of implicit behavior, the video processing module determines types of actions to add to the user profile based on that behavior, such as indications of approval for media associated with types of activities.

Object Recognition on Objects in Video Collages

The video processing module may perform object recognition to identify objects in the video. Upon user consent, it may perform object recognition that includes identifying a face in the video and determining the identity of the face. The module may compare an image frame of the face to images of people, match the image frame to other members that use the video application, etc. Upon user consent, it may request identifying information from the second server.

For example, the second server may maintain a social network. The video processing module may request profile images of other social network users connected to the user associated with the video. Upon user consent, it may apply facial recognition techniques to people in image frames of the video to identify the people behind the faces.

The video processing module may generate metadata that identifies the objects and timestamps of when those objects appear in the video. For example, the metadata may consist of labels that identify a type of object or person. If the user has provided consent, the module may generate metadata that identifies people and timestamps of when they appear in the video. For example, for a video of the user’s daughter, it may generate metadata with a timestamp for each time the daughter appears in the video, along with the objects she interacts with in the video.

The video processing module generates an interest score upon identifying a type of object or a person in the video. It may compare identified objects to a list of positive objects and a list of negative objects, which include objects commonly recognized as positive and negative, respectively.

When the user consents to the use of user data, the video processing module assigns the interest score based on personalization information for a user associated with the video. For example, upon user consent, the module maintains a social graph and generates the interest score based on a relationship, identified using the social graph, between the user and a person in the video.

Personalization and Users’ Reactions to Videos

The video processing module may determine personalization information, subject to user consent, based on explicit data provided by the user and on implicit information found in the user’s reactions to videos, such as comments provided on video websites, activity in social network applications, etc. It determines user preferences based on the types of videos associated with the user. For example, it may determine that the user prefers videos about sports based on the user creating or watching videos that include different types of sports, such as baseball, basketball, etc.

The video processing module may determine an event associated with the video, based on metadata associated with the video. For example, the metadata may include a date and a location, which the module may use to retrieve information, for example from a second server, about what event occurred at that date and time. When the user consents to the use of metadata, the video processing module may use metadata that identifies objects and people in the video to determine the event.

For example, the video processing module may determine that the event was a concert based on identifying crowds of people in the video. Particular objects may be associated with specific events: cakes with birthdays and weddings, a basketball with a game on a court, etc. People may also be related to events, such as people wearing uniforms with a school event, people sitting in pews with a church gathering, people around a table with plates with a dinner, etc. The video processing module may generate an interest score based on the type of event identified in the video.

The video processing module may use more sources of data to identify the event. For example, it may determine the date, the time, and the location where the video got taken based on metadata associated with the video and, upon user consent, request event information associated with that date and time from a calendar application associated with the user. It may request the event information from a second server that manages the calendar application.

Events From Videos Determined From Publicly Available Information

The video processing module may determine the event from publicly available information. For example, the video processing module may use the date, the time, and the location associated with the video to determine that the video is from a football game. The video processing module may associate metadata with the video that includes identifying information for the event.

The video processing module may transcribe the audio to text and identify an interesting moment based on that text. It may generate metadata that identifies a timestamp for each instance where a user spoke a specific word. For example, where the video is of speeches given at a conference on cloud computing, the module may identify a timestamp for each point where a speaker said “the future” (a sketch of this follows). The module may also use the audio as a signal of an interesting moment. For example, for sports events or other competitions, it may identify when a crowd starts cheering and determine that the continual motion occurring right before the cheering includes an interesting moment.
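A minimal sketch of the transcript step, assuming a hypothetical transcript structure of (start_seconds, text) caption lines; a real system would get these from a speech-to-text service:

```python
def keyword_timestamps(transcript, phrase):
    """Return the timestamps of caption lines containing the phrase."""
    return [start for start, text in transcript
            if phrase.lower() in text.lower()]

transcript = [(12.0, "welcome to the conference"),
              (95.5, "cloud computing is the future"),
              (230.0, "questions about the future of storage")]
print(keyword_timestamps(transcript, "the future"))  # [95.5, 230.0]
```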

The video processing module may determine whether the interest score meets or exceeds a threshold segmentation value. Suppose a part of the video includes an interest score that meets or exceeds the threshold segmentation value. In that case, the video processing module may instruct the segmentation module to generate a video segment that consists of the interesting moment. Portions of the video that fail to meet or exceed the threshold segmentation value may not get identified as including an interesting moment.
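Under stated assumptions (an illustrative threshold and a fixed one-second padding around each moment), that thresholding step might look like:

```python
def segments_for_moments(moments, threshold=0.6, pad=1.0):
    """moments: list of (timestamp_seconds, interest_score) pairs.
    Keep moments at or above the threshold and pad each into a short
    (start, end) segment; threshold and padding are illustrative."""
    return [(max(0.0, t - pad), t + pad)
            for t, score in moments if score >= threshold]

print(segments_for_moments([(5.0, 0.9), (42.0, 0.4), (60.0, 0.7)]))
# [(4.0, 6.0), (59.0, 61.0)]
```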

More on Interest Scores From Potential Video Segments

The video processing module may apply interest scores on a scale, such as from 1 to 10. The interest score may be based on a combination of factors identified in the part of the video. For example, the module may generate an interest score based on a part of the video including an event, an object, and a person.

The video processing module may receive feedback from a user and change the user profile to modify the interest score accordingly. For example, if a user provides a sign of approval (e.g., a thumbs up, a +1, a like, saving a collage to the user’s media library, etc.) of a collage that includes a video on new types of wearables, the module may add wearables to a list of positive objects.

In another example, the user may explicitly state that the user enjoys collages where the event type is a rock show. The video processing module may update personalization information associated with the user, such as a user profile, to include the rock show as a preferred event type. Feedback may also consist of an indication of disapproval (a thumbs down, a -1, a dislike, etc.). Expressions of approval and disapproval may also be determined from comments provided by a user. Feedback may further identify a person, an object, or a type of event that someone wants included in the collage.

The segmentation module may be operable to segment the video into video segments based on interesting moments. This module may be a set of instructions executable by the processor to segment the video, stored in the computer’s memory and accessible and executable by the processor.

Segmentation to Find Interesting Moments For Video Collages

The segmentation module generates video segments that include interesting moments. Where the interesting moment is associated with continual motion, the module may create a video segment with a beginning and an end. It may identify a start point and an intermediate endpoint of continual motion within the segment and pick a sub-segment that includes both points. For example, if the video is of a girl doing many cartwheels, the start point may be the start of a first cartwheel, and the intermediate endpoint may be the end of that cartwheel. In another example, the segmentation module may identify a segment based on different types of motion.

For example, a first sub-segment may be a cartwheel, and a second sub-segment may be a jumping celebration. The segmentation module may then determine how to generate the segment by including at least a particular number of interesting moments. For example, it may create a video segment with a first interesting moment involving a specific object in a first group of frames, a second interesting moment with continual motion in a second group of frames, and a third interesting moment that includes a person in a third group of frames. The segmentation module may generate a video segment that is one to three seconds long.

The segmentation module may generate a video segment that includes frames from different points in the video. For example, it may create a video segment that includes the many instances where people at a conference say “cloud computing” at different points in the video.

The segmentation module generates video segments based on a theme. When a user specifies that interesting moments include a type of action, the module generates a video segment that consists of the interesting moments identified by the video processing module. For example, it may generate a video segment of the times a person rides a scooter in the video. The segmentation module may select many action instances to include in the video segment based on the interest scores.

Ranking Interesting Moments To Choose For Video Collages

The segmentation module may rank the interesting moments based on their corresponding interest scores and select some of them based on the length of the video segment, such as three seconds, five seconds, twenty seconds, etc. For example, it may select the top five most interesting moments based on the ranking because the total length of those five moments is under 20 seconds (sketched below).
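A minimal sketch of that ranking step as a greedy selection under a 20-second budget (the greedy strategy is my assumption; the patent only gives the ranking and the length limit):

```python
def top_moments(moments, max_total_seconds=20.0):
    """moments: list of (score, duration_seconds, label) tuples.
    Rank by score and keep moments while the running total stays
    within the time budget."""
    chosen, total = [], 0.0
    for score, duration, label in sorted(moments, reverse=True):
        if total + duration <= max_total_seconds:
            chosen.append(label)
            total += duration
    return chosen

moments = [(0.9, 4.0, "A"), (0.8, 6.0, "B"), (0.7, 12.0, "C"),
           (0.6, 5.0, "D"), (0.5, 3.0, "E")]
print(top_moments(moments))  # ['A', 'B', 'D', 'E'] -- 18 seconds total
```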

The segmentation module may determine markers that indicate different sections within the video and generate segments that include interesting moments within those sections.

The sections may include:

  • Different acts or scenes in a movie
  • Different news segments in a news reporting show
  • Different videos in a show about people filming dangerous stunts on video
  • Etc.

For example, the segmentation module may generate three video segments for a movie, where the three segments represent the film’s three acts and each segment includes interesting moments cut from the corresponding act. The markers may consist of metadata indicating each section’s start and end, black frames, white frames, a title card, a chapter card, etc.

The segmentation module verifies that the video segments are different from each other. For example, the segmentation module may determine that each video segment includes different objects, so the collage does not include video segments that look too similar.

The collage module may be operable to generate a collage from the video segments. It can be a set of instructions executable by the processor to provide the functionality described below for generating the collage, stored in the computer’s memory and accessible and executable by the processor.

The collage module receives video segments from the segmentation module. The collage module may retrieve the selected video segments from the storage device.

Generating Video Collages From Video Segments

The collage module may generate a collage from the video segments where the video segments get displayed in a single pane. The video collages may take many forms. For example, the collage module may generate video collages when at least two video segments are available. In another example, the collage module may create video collages when at least four video segments are available. The video segments may be displayed in square windows, in portrait windows (e.g., if the video segment gets shot in portrait mode), in a landscape window (e.g., if the video gets shot in landscape mode), and with different aspect ratios (e.g., 16:9, 4:3, etc.).

The collage module may configure the aspect ratios and orientations based on the user device used to view the collage. For example, the collage module may use a 16:9 aspect ratio for high-definition televisions, a 1:1 aspect ratio for square displays or viewing areas, a portrait collage for a user device in a portrait orientation, and a wide collage (e.g., 100:9) for wearables such as augmented reality and virtual reality displays.
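As a sketch, that device-based configuration could be as simple as a lookup table; the ratios below follow the patent’s examples, while the device names and the default are mine:

```python
# Aspect ratios from the patent's examples; device keys are hypothetical.
LAYOUTS = {
    "hdtv": "16:9",
    "square_display": "1:1",
    "portrait_phone": "9:16",
    "ar_vr_wearable": "100:9",
}

def collage_aspect_ratio(device: str) -> str:
    return LAYOUTS.get(device, "4:3")  # fallback is an assumption

print(collage_aspect_ratio("hdtv"))  # 16:9
```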

The collage module may combine a predetermined number of video segments to form the collage. For example, it may rank the video segments from most interesting to least interesting based on the interest scores and generate a collage from the predetermined number of the most interesting video segments. It may select video segments with interest scores that meet or exceed a predetermined collage value.

The collage module processes the video segments. For example, the collage module may convert the video segments to high dynamic range (HDR), black and white, sepia, etc.

The Layout and Ordering of Video Segments Based On Chronology

The collage module may lay out and order the video segments based on chronology, interest scores, visual similarity, color similarity, and the length of each segment. Ordering the collage chronologically may place the first video segment at the earliest time, the second video segment at the next earliest time, and so on. The collage module may order the video segments by interest score by ranking them from most interesting to least interesting. It may arrange the video segments in a clockwise direction, a counterclockwise direction, or an arbitrary direction. Other configurations are possible.

The collage module generates instructions for the user interface module to create graphical data that renders the collage with video segments in windows of different sizes. The size of the windows may get based on interest scores for each of the video segments. For example, the video segment with an interest score that indicates that it is most interesting may have the largest window size.

Additionally, the size of the windows may get based on the length of the video segments. For example, the shortest video segment may correspond to the smallest window size. The collage module may determine window size based on an artistic effect. For example, the collage module may generate windows that resemble creative works from the De Stijl art movement. In particular, the collage module may create a collage with shapes that resemble a Piet Mondrian painting with different sized boxes and different line thicknesses that distinguish the separation between different video segments.
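A minimal sketch of score-proportional window sizing; a real layout engine would also honor segment lengths and the Mondrian-style grid the patent mentions:

```python
def window_areas(scores, canvas_area=1.0):
    """Give each window an area equal to its share of the total
    interest score."""
    total = sum(scores)
    return [canvas_area * score / total for score in scores]

print(window_areas([0.9, 0.5, 0.2]))
# [0.5625, 0.3125, 0.125] -- the top-scoring segment gets the biggest window
```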

The collage module generates a collage that is a video file (e.g., an animated GIF, an MPG, etc.) with associated code (e.g., JavaScript) that recognizes user selection (e.g., to move to the second collage in a hierarchy, to play back a specific segment, etc.). The collage module may link the video segments to locations in the video: upon selection of one of the video segments, the location in the video that corresponds to that segment gets displayed. For example, each video segment in the collage may include a hyperlink to the corresponding location in the video.

Generating Video Collages by Meeting a Threshold Score

The collage module generates and displays a collage by determining video segments that meet a threshold score. It may evaluate display characteristics for the collage and identify window layouts that meet the display characteristics. It can also select a particular window layout, generate the collage, and cause the collage to get displayed.

One of the patent’s figures illustrates an example timeline of a video and a corresponding collage generated from four interesting moments. The timeline represents an eight-minute video: an ice skating competition where four different ice skating couples each have a two-minute performance. In this example, the video processing module identified four interesting moments, labeled A, B, C, and D.

The segmentation module generates four video segments where each video segment includes a corresponding interesting moment.

Interesting moment A may include a first couple executing a sustained edge step.

The interesting moment B may consist of a second couple where one of the skaters executes a triple axel jump.

The interesting moment C may include a third couple executing the sustained edge step.

And the interesting moment D may consist of a fourth couple executing a serpentine step sequence.

The video processing module may determine the interesting moments based on a user identifying the interesting moments, identifying continual motion, for example, a motion that occurs before the crowd starts cheering, or another technique.

The collage module generates a collage from the video segments. In this example, the collage module generates a collage that orders the video segments chronologically in a clockwise direction. Suppose a user selects one of the video segments.

The user interface module may cause the video to get displayed at the location in the video that corresponds to the time of the video segment.

For example, in the example depicted, if a user selects video segment D, a new window may appear that displays the video at the D location illustrated on the timeline near the end of the video.

A Graphic Representation of Another Example Video Collage

In this example, the collage includes eight video segments. The collage module may generate different-sized windows for the collage based on the interest scores for each video segment and the lengths of the video segments. For example, the figure may represent a collage generated from a video of a news program. Video segment A may represent the feature news story for the program, which is both the most interesting and the longest; as a result, video segment A gets displayed in the largest window. Video segments B, C, and H represent other, less interesting and shorter news segments. Lastly, video segments D, E, F, and G represent short snippets in the news program.

The collage module may also generate a hierarchical collage. Hierarchical collages may be helpful for presenting a limited number of video segments in a single window. In addition, a hierarchical collage may create an engaging effect in cases where too many video segments would appear crowded. The collage module may group the video segments based on the timing of the video segments or a type of interesting moment associated with the video segments.

The collage module may generate the first collages based on the groups. For example, it may divide a video into three parts and generate a first collage for the video segments in the first, second, and last parts. In another example, a video may include tryouts and competitions, and the collage module may group by the type of interesting moment, distinguishing between tryouts and competitions.

The collage module may generate two first collages: one for the video segments in the tryouts and one for the video segments in the competitions. The representative segment may be the longest video segment in a group, or a segment with a high amount of continual motion compared with other segments in the group. A combination of interest score, segment length, amount of continual motion, etc., may be used to select the representative segment.

The collage module may select a representative segment from the video segments associated with each first collage. The representative segment may be based on the interest score for each of the video segments in the group. For example, continuing with the example of a group of tryouts and a group of competitions, the collage module may select the most interesting tryout video segment as the tryout group’s representative segment.

The collage module may generate a second collage that includes the representative segment for each of the groups. The representative segments link to their corresponding first collages, so that selecting one of them causes the related first collage to be displayed. The collage module may instruct the user interface module to generate graphical data that opens the corresponding first collage from the second collage, replaces the second collage with the first collage, or causes all the first collages to be displayed.

The collage module configures the video segments in the collage to play automatically; alternatively, the collages may have to be selected to play. The video segments may play at once or sequentially, such that a first video segment plays, then a second, etc. The video segments may play once or be configured to play on a continuous loop. A user may be able to configure automatic playback and other options as system settings.

The collage module configures the video segments to play at different frame rates. For example, video segment A may play at the standard speed of 24 FPS (frames per second), video segment B may play at a slower pace of 16 FPS, video segment C may play at a faster speed of 50 FPS, and video segment D may play at 24 FPS. The collage module selects the frame rate based on the content of the video segment. For example, it may choose a slow frame rate for video segments where the rate of continual motion is high, such as a video segment of a pitcher throwing a baseball, and a faster frame rate where the rate of continual motion is low, such as a video segment of a person blowing out a candle or cutting a cake (sketched below).
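A minimal sketch of that inverse relationship between motion and playback speed; the motion-rate thresholds are illustrative assumptions, but the frame rates are the ones the patent’s example uses:

```python
def frame_rate_for_segment(motion_rate: float) -> int:
    """High continual motion plays slower so the viewer can follow it;
    low motion plays faster. Thresholds are assumptions."""
    if motion_rate > 0.7:
        return 16   # slow playback, e.g., a pitcher throwing a baseball
    if motion_rate < 0.3:
        return 50   # fast playback, e.g., blowing out a candle
    return 24       # regular speed

print(frame_rate_for_segment(0.8))  # 16
```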

An Example Timeline And Hierarchical Video Collages

For example, the timeline represents a video of a meeting that includes presenters giving talks, attendees forming discussion groups, and closing remarks being presented. The collage module groups the video segments into three groups: group A represents a section where presenters talk, group B represents a section where people form discussion groups, and group C represents the closing remarks.

The collage module generates two first collages: one for group A, which includes four video segments, and one for group B, which includes three video segments. It then generates a second collage that includes representative segments for the two first collages plus the video segment for group C, so the second collage consists of one segment from each of groups A, B, and C.

Suppose a user selects the representative segment for group A. In that case, the user interface module causes a user interface to display the first collage for group A, which includes video segments A1, A2, A3, and A4. If the user selects video segment A3, it causes the user interface to display the video at the location corresponding to A3 in the timeline.

The user interface module may be operable to provide information to a user. It can be a set of instructions executable by the processor to provide the functionality described below for providing information to a user, stored in the computer’s memory and accessible and executable by the processor.

The user interface module may receive instructions from the other modules in the video application to generate graphical data operable to display a user interface. For example, the user interface module may create a user interface that displays a collage created by the collage module.

The user interface module may generate graphical data to display collages that link to the full video. In response to a user clicking on the collage, the user interface may display the original video or open a new webpage that includes the full video. The user interface module provides an option to download the collage to a user device or to stream the collage from the video server.

The user interface module may generate an option for a user to provide feedback on the collages. For example, it may create a user interface that includes a feedback button that the user can select to view a drop-down menu of objects that the user wants to add as explicit interests. The user interface module may populate that menu from labels associated with the video segments, creating the list of objects that the user may select as explicit interests.

A Graphic Representation of a User Interface That Includes a Videos Section

In the videos section, the user interface module may receive a designation of an interesting moment from a user. In this example, the user interface includes instructions that inform users that they can identify interesting moments by clicking on the video. As a result of the user’s selection, the segmentation module generates a segment that includes the interesting moment, and the collage module generates a collage that consists of the video segments.

The figure also includes a collages section that contains a collage. In this example, the user selects one of the playback buttons to view a corresponding video segment. The user interface also includes a +1 button for indicating approval of the video and a share button that allows the user to share the collage. For example, the user interface module may generate an option for sharing the collage via a social network, e-mail, a chat application, etc.

An Example Method To Generate A Video Collage

Interesting moments get determined in a video; they may be identified by a user or selected based on continual motion, objects in the video, etc. Video segments get generated based on the interesting moments, where each of the video segments includes at least one of the interesting moments from the video. A collage gets generated from the video segments, where the collage consists of at least two windows, and each window includes one of the video segments.

Generating Hierarchical Video Collage

The steps may get performed by the video application.

Video collages get created based on interesting moments.

Interesting moments get determined in a video.

The video segments get grouped into groups.

Two or more first video collages get generated, each corresponding to one of the two or more groups, and each of the first video collages includes at least two video segments. A representative segment gets selected for each group from the at least two video segments of each of the first collages. A second collage gets generated that includes the representative segment for each group; each representative segment links to the corresponding first collage, which includes at least two video segments from the related group.
