Thu, 03 Nov 2016 08:00:00 EDT

A method and system for producing a relevance-sorted video summary are provided herein. The method may include: obtaining a source video containing a plurality of source objects; receiving features descriptive of at least some of the source objects; clustering the source objects into clusters, each cluster including source objects that are similar with respect to one of the features or a combination of the features; obtaining a relevance level for each of the clustered source objects; generating synopsis objects by sampling the respective clustered source objects; and generating a synopsis video whose overall play time is shorter than that of the source video by determining a play time for each of the synopsis objects based at least partially on its relevance level.
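The clustering and relevance-weighted time allocation described above can be sketched roughly as follows. All names, the single-feature clustering, and the proportional time split are illustrative assumptions, not the patent's actual implementation:

```python
# Sketch: cluster source objects by a feature, sample one synopsis object per
# cluster, and allocate play time in proportion to relevance (assumed scheme).
from collections import defaultdict

def build_synopsis(objects, total_play_time):
    """objects: list of (object_id, feature, relevance) tuples."""
    clusters = defaultdict(list)
    for obj_id, feature, relevance in objects:
        clusters[feature].append((obj_id, relevance))
    # Sample one synopsis object per cluster (here: the most relevant member).
    synopsis = [max(members, key=lambda m: m[1]) for members in clusters.values()]
    total_relevance = sum(r for _, r in synopsis) or 1.0
    # Play time for each synopsis object is proportional to its relevance level.
    return {obj_id: total_play_time * r / total_relevance for obj_id, r in synopsis}

times = build_synopsis(
    [("car1", "vehicle", 0.9), ("car2", "vehicle", 0.4), ("p1", "person", 0.6)],
    total_play_time=30.0,
)
```

The synopsis total (30 seconds here) is fixed up front, so more relevant clusters simply receive a larger share of it.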


Thu, 03 Nov 2016 08:00:00 EDT

A media player is configured with a first removable memory reader, such as a DVD drive, and a second removable memory reader, such as a flash memory reader, adapted to communicate with a removable memory containing filter data. The media player is configured to allow filtered playback of a multimedia presentation, such as a movie. Filtered playback causes certain portions of the multimedia presentation to be skipped, muted, blurred, cropped, or otherwise modified to eliminate or reduce potentially objectionable scenes, language, or other content. The second memory reader provides a convenient medium for loading filter information, whether data files, executable program code, or the like, into local memory of the media player for use during filtered playback. Alternatively, the filters may be accessed from the removable storage media during playback rather than being loaded into local memory.
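Filtered playback of the kind described can be illustrated with a minimal sketch; the `(start, end, action)` filter-entry layout is an assumption for this example, not the filter format the patent specifies:

```python
# Sketch: drop playback segments that fall entirely inside a "skip" filter.
def apply_filters(segments, filters):
    """segments: list of (start, end) play ranges in seconds;
    filters: list of (start, end, action) entries loaded from removable memory."""
    out = []
    for seg_start, seg_end in segments:
        skipped = any(
            action == "skip" and f_start <= seg_start and seg_end <= f_end
            for f_start, f_end, action in filters
        )
        if not skipped:
            out.append((seg_start, seg_end))
    return out

playlist = apply_filters(
    [(0, 10), (10, 20), (20, 30)],   # candidate play segments
    [(10, 20, "skip")],              # filter data from the flash memory
)
```

Mute, blur, or crop actions would transform a segment in place rather than remove it from the play list.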

Methods and Systems for Managing a Local Digital Video Recording System

Thu, 03 Nov 2016 08:00:00 EDT

An exemplary web services provider system, remote from and communicatively coupled to a local digital video recording (“DVR”) system by way of a network, detects an input command provided by a user and representative of a request for the local DVR system to perform a DVR operation with respect to a media program provided by a television service. In response to the request, the system identifies a status of the media program, determines, based on the identified status, an optimal manner in which to perform the DVR operation, and directs the local DVR system to perform the DVR operation in accordance with that manner. Corresponding systems and methods are also described.
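The status-to-manner decision at the heart of this flow can be sketched as a simple mapping; the status names and the chosen manners below are hypothetical examples, not the patent's enumerated cases:

```python
# Sketch: pick a recording manner based on the media program's status.
def choose_dvr_action(status):
    """Hypothetical mapping from program status to an 'optimal manner'."""
    if status == "not_yet_aired":
        return "schedule_local_recording"
    if status == "currently_airing":
        return "record_remainder_and_fetch_beginning_from_cloud"
    if status == "already_aired":
        return "download_from_network_catalog"
    raise ValueError(f"unknown status: {status}")
```

The point of the design is that the remote service, not the local DVR, resolves the status and picks the manner, then directs the DVR accordingly.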

Grouping and Presenting Content

Thu, 03 Nov 2016 08:00:00 EDT

A provider transmits instructions to a receiver to record multiple instances of content. The instances of content are included in the same frequency band of a broadcast signal transmitted via a first communication link and encoded utilizing a common encryption. The provider may determine to supplement the content and transmit an instruction to record a supplemental instance of content from a second content provider via a second communication link. The receiver receives the instructions and accordingly receives, decodes, and stores the multiple instances of content and the supplemental content. A recorder in communication with the receiver determines whether a content selection is a member of a content group, based on a tag of the content selection. If the content selection is a member of the content group, the recorder presents to a display device the content selection and at least one other member of the content group.
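The tag-based group membership test can be sketched as follows; representing a selection as an `(id, tag)` pair is an assumption made for illustration:

```python
# Sketch: find the selection's group tag, then gather all members of that group.
def group_members(selections, selected_id):
    """selections: list of (selection_id, group_tag) pairs.
    Returns the selection plus every other member sharing its tag."""
    tag = next(t for sid, t in selections if sid == selected_id)
    return [sid for sid, t in selections if t == tag]

members = group_members(
    [("ep1", "show_a"), ("ep2", "show_a"), ("movie", "show_b")],
    "ep1",
)
```

The recorder would then present `members` (here the two `show_a` recordings) together on the display device.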


Thu, 03 Nov 2016 08:00:00 EDT

Embodiments are disclosed for embedding calibration metadata for a stereoscopic video capturing device. The device captures a sequence of stereoscopic images with a plurality of image sensors and combines the captured sequence into a stereoscopic video sequence. The device further embeds calibration information into the stereoscopic video sequence in real time as the sequence of stereoscopic images is being recorded. The calibration information can be used to correct distortion caused by hardware variances of individual video capturing devices. The corrected stereoscopic videos can be used to provide a virtual reality (VR) experience by immersing a user in a simulated environment.
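The embedding step can be sketched as attaching the device's calibration record to each frame pair as it is written; the record fields shown are hypothetical, not the patent's metadata schema:

```python
# Sketch: embed a per-device calibration record alongside each stereo frame
# pair at record time, so a player can later undo lens/sensor distortion.
def embed_calibration(frame_pairs, calibration):
    """frame_pairs: list of (left, right) images; calibration: device record."""
    return [
        {"left": left, "right": right, "calibration": calibration}
        for left, right in frame_pairs
    ]

stream = embed_calibration(
    [("L0", "R0"), ("L1", "R1")],
    {"baseline_mm": 63.0, "lens_distortion_k1": -0.12},  # assumed fields
)
```

Because the calibration travels inside the video sequence itself, any player can apply the per-device correction without a separate side channel.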


Thu, 03 Nov 2016 08:00:00 EDT

An apparatus includes a recording unit configured to record a video image; a generation unit configured to generate, when a first type of event in which a state of an object changes is detected from the video image, an index associating the first type with a second type of event related to the first type in the video image; and a playback unit configured to play back, when the first type is specified, a video image concerning the second type corresponding to the specified first type, based on the index.
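The index that ties a first event type to its related second-type events can be sketched as a simple lookup table; the event names and `(timestamp, type, related_type)` shape are assumptions for illustration:

```python
# Sketch: build an index from first-type events to related second-type events,
# then look it up at playback time.
def build_index(events):
    """events: list of (timestamp, first_type, second_type) detections."""
    index = {}
    for ts, first, second in events:
        index.setdefault(first, []).append((ts, second))
    return index

def lookup(index, first_type):
    """Return (timestamp, second_type) entries to play back for a first type."""
    return index.get(first_type, [])

idx = build_index([
    (12.0, "door_opened", "person_enters"),
    (47.5, "door_opened", "person_leaves"),
    (60.0, "alarm", "crowd_forms"),
])
```

Specifying `"door_opened"` at playback would surface the related second-type segments at 12.0 s and 47.5 s.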


Thu, 03 Nov 2016 08:00:00 EDT

The present invention relates to the field of video surveillance. Disclosed are a method and device for extracting surveillance record videos. In the present invention, the method for extracting surveillance record videos comprises the following steps: acquiring and storing lens viewsheds of cameras and an irradiation time period corresponding to each lens viewshed; extracting lens viewsheds corresponding to irradiation time periods that intersect a query time period; calculating intersections between the extracted lens viewsheds and a target location; obtaining a set of cameras whose lens viewsheds intersect the target location; and extracting videos captured by the cameras in the camera set according to their irradiation time periods. Cameras relevant to the target can be found by computing intersections between the user's designated target location and time period and the selected camera viewsheds, so that videos meeting the query conditions can be extracted directly from the relevant cameras, reducing the labor and time consumed in manually checking video records.
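The two intersection tests (time period and viewshed coverage) can be sketched as follows; approximating each lens viewshed with an axis-aligned rectangle and the target with a point is a simplification for this example:

```python
# Sketch: select cameras whose viewshed covered the target location during the
# query time window. Viewsheds are simplified to rectangles here.
def find_cameras(viewsheds, query_time, target):
    """viewsheds: list of (camera_id, (t0, t1), (x0, y0, x1, y1));
    query_time: (q0, q1); target: (x, y) point of interest."""
    q0, q1 = query_time
    tx, ty = target
    hits = []
    for cam, (t0, t1), (x0, y0, x1, y1) in viewsheds:
        time_overlap = t0 < q1 and q0 < t1          # irradiation period test
        covers_target = x0 <= tx <= x1 and y0 <= ty <= y1  # viewshed test
        if time_overlap and covers_target:
            hits.append(cam)
    return hits

cams = find_cameras(
    [("cam1", (0, 100), (0, 0, 10, 10)),
     ("cam2", (0, 100), (20, 20, 30, 30)),
     ("cam3", (200, 300), (0, 0, 10, 10))],
    query_time=(50, 60),
    target=(5, 5),
)
```

Only the cameras in `cams` need their recordings pulled, which is the labor saving the method claims.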


Thu, 03 Nov 2016 08:00:00 EDT

A main stream contains successive content elements of video and/or audio information that encode video and/or audio information at a first data rate. A computation circuit (144) computes main fingerprints from the successive content elements. A reference stream is received having a second data rate lower than the first data rate. The reference stream defines a sequence of reference fingerprints. A comparator unit (144) compares the main fingerprints with the reference fingerprints. The main stream is monitored for the presence of inserted content elements between original content elements, where the original content elements have main fingerprints that match successive reference fingerprints and the inserted content elements have main fingerprints that do not match reference fingerprints. Rendering of the inserted content elements can then be skipped. In an embodiment, when more than one content element matches, only one is rendered. In another embodiment, matching is used to control zapping to or from the main stream. In another embodiment, matching is used to control linking of separately received mark-up information, such as subtitles, to points in the main stream.
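The core match test, flagging main-stream elements whose fingerprints appear in no reference fingerprint, can be sketched as follows; treating fingerprints as comparable opaque values is an assumption for this example:

```python
# Sketch: flag content elements as "inserted" when their fingerprint matches
# no fingerprint from the low-rate reference stream.
def find_inserted(main_fps, ref_fps):
    """Returns indices of main-stream elements to skip during rendering."""
    ref_set = set(ref_fps)
    return [i for i, fp in enumerate(main_fps) if fp not in ref_set]

skip = find_inserted(
    ["fp_a", "fp_b", "fp_ad1", "fp_ad2", "fp_c"],  # main stream fingerprints
    ["fp_a", "fp_b", "fp_c"],                       # reference fingerprints
)
```

A renderer would then play the main stream while skipping the elements at the indices in `skip` (here, the two inserted elements).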


Thu, 03 Nov 2016 08:00:00 EDT

Automatically annotating multimedia content at a base station includes (i) identifying an optimal pairing between a video capturing device and a base station, (ii) receiving, based on the optimal pairing, video sensor data from a video sensor embedded in the video capturing device that captures a video associated with a user, (iii) receiving, from the video capturing device, a set of information associated with the video capturing device, (iv) synchronizing the video and the video sensor data to obtain synchronized video content using a transmitted signal power from the video capturing device and a received signal power at the base station, and (v) annotating the synchronized video content with the set of information to obtain annotated video content.
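Once the power-based synchronization has aligned the timestamps, the annotation step can be sketched as pairing each frame with its nearest-in-time sensor sample; the data shapes below are assumptions for illustration:

```python
# Sketch: annotate frames with the nearest sensor sample plus device info.
# Timestamps are assumed already aligned by the signal-power synchronization.
def annotate(video_frames, sensor_samples, device_info):
    """video_frames: list of (timestamp, frame);
    sensor_samples: list of (timestamp, reading); device_info: dict."""
    annotated = []
    for t, frame in video_frames:
        nearest = min(sensor_samples, key=lambda s: abs(s[0] - t))
        annotated.append({"time": t, "frame": frame,
                          "sensor": nearest[1], **device_info})
    return annotated

result = annotate(
    [(0.0, "frame0"), (1.0, "frame1")],
    [(0.1, {"gyro": 1}), (0.9, {"gyro": 2})],
    {"device": "helmet_cam"},  # hypothetical device information
)
```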

Method and system for segmenting videos

Thu, 03 Nov 2016 08:00:00 EDT

Techniques for segmenting a video using tags without interfering with its video data are disclosed. According to one aspect of the present invention, each tag is created to define a portion of the video, and the tags can be modified, edited, looped, reordered, or restored to create an impression other than that of the video played back sequentially. The tags are structured in a table included in a tagging file that can be shared or published electronically, or modified or updated by others. Further, the table may be modified to include one or more conditional or commercial tags.
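Because the tags only reference portions of the video, reordering or looping them is a table operation, not a video edit. A minimal sketch, with a hypothetical tag-table layout:

```python
# Sketch: a tag table maps tag names to (start, end) portions of the video;
# an ordered (possibly repeating) tag sequence defines the playback, leaving
# the underlying video data untouched.
def resolve_playlist(tag_table, sequence):
    """tag_table: {tag_name: (start, end)}; sequence: ordered tag names,
    repeated entries loop that portion."""
    return [tag_table[name] for name in sequence]

order = resolve_playlist(
    {"intro": (0, 5), "chorus": (20, 35)},
    ["chorus", "intro", "chorus"],   # non-sequential, looped playback
)
```

Sharing the tagging file shares only this table, so others can publish their own orderings of the same source video.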

Unified Processing of Multi-Format Timed Data

Thu, 03 Nov 2016 08:00:00 EDT

A timed data component is implemented within an operating system to provide parsing and data conversion of multiple timed data formats. The timed data component supports multiple formats of closed caption data and timed metadata, generating structured cue objects that include the data and timing information. Applications using proprietary or non-supported formats can pre-format the timed data as structured cue objects before sending the timed data to the timed data component. Structured cue objects output from the timed data component may be processed by a single text renderer to provide a consistent look and feel to closed caption data originating in any of multiple formats.
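The normalization into structured cue objects can be sketched as follows; the format names, field layouts, and cue shape are assumptions for illustration, not the operating system's actual API:

```python
# Sketch: convert closed-caption entries from different source formats into
# one structured cue shape (text plus start/end in seconds) that a single
# text renderer can consume.
def to_cue(raw, fmt):
    if fmt == "srt_like":          # assumed layout: (start_s, end_s, text)
        start, end, text = raw
        return {"start": start, "end": end, "text": text}
    if fmt == "cea608_like":       # assumed layout: (text, start_ms, duration_ms)
        text, start_ms, dur_ms = raw
        return {"start": start_ms / 1000,
                "end": (start_ms + dur_ms) / 1000,
                "text": text}
    raise ValueError(f"unsupported format: {fmt}")

cue_a = to_cue((1.0, 2.5, "Hello"), "srt_like")
cue_b = to_cue(("Hello", 500, 1500), "cea608_like")
```

Applications with proprietary formats would build the same cue dictionaries themselves before handing them to the component, which is what gives all captions a consistent rendered appearance.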


Thu, 03 Nov 2016 08:00:00 EDT

A sensor event detection and tagging system that analyzes data from multiple sensors to detect an event and to automatically select or generate tags for the event. Sensors may include, for example, a motion capture sensor and one or more additional sensors that measure values such as temperature, humidity, wind, or elevation. Tag assignment and event detection may be performed by a microprocessor associated with or integrated with the sensors, or by a computer that receives data from the microprocessor. Tags may represent, for example, activity types, players, performance levels, or scoring results. The system may analyze social media postings to confirm or augment event tags. Users may filter and analyze saved events based on the assigned tags. The system may create highlight and fail reels filtered by metrics and by tags.
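A minimal sketch of threshold-based event detection with automatic tagging; the threshold rule and the tag name are illustrative assumptions, not the system's actual detection logic:

```python
# Sketch: detect events where a sensor value crosses a threshold and tag them.
def detect_events(samples, threshold):
    """samples: list of (timestamp, value) from a sensor stream.
    Returns tagged event records for values at or above the threshold."""
    return [{"time": t, "tag": "high_activity", "value": v}
            for t, v in samples if v >= threshold]

events = detect_events([(0, 1.0), (1, 5.0), (2, 0.5)], threshold=3.0)
```

Downstream, users could filter saved events by the `tag` field to build highlight reels, as the abstract describes.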