Subscribe: Untitled
http://www.freepatentsonline.com/rssfeed/rssapp348.xml
Language: English
Tags:
audio  based  camera  data  device  display  imaging  includes  lens  light  method  object  system  unit  user  video 

CAMERA MODULE AND SYSTEM FOR PREVENTING DEW CONDENSATION IN A CAMERA MODULE

Thu, 06 Apr 2017 08:00:00 EDT

The present disclosure relates to a camera module and a dew condensation prevention system for the camera module. Disclosed is a camera module including a lens assembly; a lens barrel of a cylindrical shape with a predetermined height, configured to accommodate the lens assembly; and a heat radiation member formed along an outer surface of the lens barrel to transfer heat to an outermost lens of the lens assembly, wherein the lens assembly has a field-of-view region formed in a first direction and a second direction crossing the first direction, and at least one of the first and second fields of view exceeds 180 degrees. A dew condensation prevention system using the camera module is also disclosed.



Method and System for Monitoring Video With Single Path of Video and Multiple Paths of Audio

Thu, 06 Apr 2017 08:00:00 EDT

The present invention provides a method and system for video surveillance with a single channel of video and multiple channels of audio. The method comprises: a device end allocating a fixed initial SSRC value for each channel of audio; a client end and the device end establishing an RTSP interaction mode; the client end requesting, from the device end, a single channel of video and multiple channels of audio; the device end randomly generating, for each channel of audio, a corresponding modified SSRC value, and sending the same to the client end; the device end capturing the single channel of video and the multiple channels of audio, sending an RTP packet of the single channel of video to the client end, and, after modifying the initial SSRC value in the RTP packet of each channel of audio to the corresponding modified SSRC value, sending the RTP packet of each channel of audio including the modified SSRC value to the client end; and the client end distinguishing individual channels of audio according to the modified SSRC values in the RTP packets of the multiple channels of audio, and playing the video and/or the audio of a corresponding channel according to a user's demand. The present invention can implement audio-video capturing of multiple channels of audio and a single channel of video, and enables a user to freely select and play the video and/or audio of a corresponding channel.
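
The SSRC rewrite at the heart of this scheme is a fixed-offset header edit: in a standard RTP packet (RFC 3550) the SSRC occupies bytes 8-11 of the 12-byte fixed header. A minimal sketch in Python; the example header and SSRC values are invented:

```python
import struct

def rewrite_ssrc(rtp_packet: bytes, new_ssrc: int) -> bytes:
    """Replace the SSRC field (bytes 8-11 of the fixed RTP header)."""
    if len(rtp_packet) < 12:
        raise ValueError("RTP packet shorter than the fixed 12-byte header")
    return rtp_packet[:8] + struct.pack("!I", new_ssrc) + rtp_packet[12:]

# Minimal header: V=2, PT=0, seq=1, timestamp=0, initial SSRC 0x11111111.
header = bytes([0x80, 0x00, 0x00, 0x01, 0, 0, 0, 0]) + struct.pack("!I", 0x11111111)
packet = header + b"payload"
modified = rewrite_ssrc(packet, 0x2222AAAA)
```

Because only bytes 8-11 change, sequence numbers, timestamps, and payload pass through untouched, which is what lets the client demultiplex channels purely on the modified SSRC.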



SYSTEM AND METHOD FOR VIDEO AND SECONDARY AUDIO SOURCE SYNCHRONIZATION

Thu, 06 Apr 2017 08:00:00 EDT

Disclosed herein are systems, methods, and computer-readable storage media for video and secondary audio source synchronization. A system practicing the method establishes a communication with a device such as a telephone or digital device and receives an identification of an audio stream associated with a video presentation specified by a user. The user interacts with the system using the dial pad on a telephone or by manipulating controls within a dedicated application on a smartphone or tablet device. The system computes a likely delay affecting most devices receiving the audio stream and delays transmission of the audio stream to the device by that amount. In one embodiment, the system receives a delay adjustment signal from the device and adjusts the delay amount according to the signal. The system then transmits the audio stream to the device, delayed by the new delay amount.
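
The delay-adjustment step reduces to adding the device's signal to the computed base delay and keeping the result in bounds. A minimal sketch; the clamp range is an assumption, not from the source:

```python
def adjusted_delay(base_delay_ms: int, adjustment_ms: int,
                   min_ms: int = 0, max_ms: int = 5000) -> int:
    """Apply a viewer's delay-adjustment signal to the computed base
    delay, clamped to a plausible range (bounds are illustrative)."""
    return max(min_ms, min(max_ms, base_delay_ms + adjustment_ms))
```

For example, a base delay of 800 ms nudged forward by 150 ms yields 950 ms, while an adjustment that would go negative clamps to the minimum.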



PROJECTION VIDEO DISPLAY

Thu, 06 Apr 2017 08:00:00 EDT

A projection video display suppresses deterioration of video quality attributable to a change in the optical path of the projected video image. A video signal generator performs control such that a second subframe of an N-th frame of a left-eye image is displayed on DMDs and then a second subframe of an N-th frame of a right-eye image is displayed on the DMDs. Furthermore, the video signal generator performs control such that the displayed location on the screen does not change and the same types of subframes are displayed when frames are switched.



METHOD AND SYSTEM USING REFRACTIVE BEAM MAPPER TO REDUCE MOIRE INTERFERENCE IN A DISPLAY SYSTEM INCLUDING MULTIPLE DISPLAYS

Thu, 06 Apr 2017 08:00:00 EDT

A multi-display system (e.g., a display including multiple display panels) includes at least first and second displays (e.g., display panels or display layers) arranged substantially parallel to each other in order to display three-dimensional (3D) features to a viewer(s). An optical element(s) such as at least a refractive beam mapper (RBM) is utilized in order to reduce moiré interference.



METHOD AND APPARATUS FOR INDIVIDUALIZED THREE DIMENSIONAL DISPLAY CALIBRATION

Thu, 06 Apr 2017 08:00:00 EDT

A target is outputted to an ideal position in 3D space. A viewer indicates the apparent position of the target, and the indication is sensed. An offset between the ideal and apparent positions is determined, and an adjustment is derived from the offset such that, with the adjustment applied, the apparent position matches the ideal position. The adjustment is made to the first entity and/or a second entity, such that the entities appear to the viewer in the ideal position. The indication may be monocular, with a separate indication for each eye, or binocular, with a single viewer indication for both eyes. The indication may also serve as communication, such as a PIN input, so that calibration is transparent to the viewer. The method may be continuous, intermittent, or otherwise ongoing over time.
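
The offset-and-adjustment step can be sketched as per-axis vector arithmetic; this is a simplification, since the per-eye geometry in the abstract is richer than a single 3D shift:

```python
def calibration_offset(ideal, apparent):
    """Per-axis offset between the ideal target position and the
    position the viewer indicated."""
    return tuple(i - a for i, a in zip(ideal, apparent))

def apply_adjustment(position, offset):
    """Shift an output position so its apparent location lands on the
    originally intended spot."""
    return tuple(p + o for p, o in zip(position, offset))

ideal = (0.0, 1.0, 2.0)
apparent = (0.1, 0.9, 2.2)   # where the viewer perceived the target
offset = calibration_offset(ideal, apparent)
```

Applying the offset to the apparent position recovers the ideal position, which is the invariant the calibration maintains over time.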



CALIBRATING A NEAR-EYE DISPLAY

Thu, 06 Apr 2017 08:00:00 EDT

Examples are disclosed herein that relate to calibrating a user's eye for a stereoscopic display. One example provides, on a head-mounted display device including a see-through display, a method of calibrating a stereoscopic display for a user's eyes, the method including for a first eye, receiving an indication of alignment of a user-controlled object with a first eye reference object viewable via the head-mounted display device from a perspective of the first eye, determining a first ray intersecting the user-controlled object and the first eye reference object from the perspective of the first eye, and determining a position of the first eye based on the first ray. The method further includes repeating such steps for a second eye, determining a position of the second eye based on a second ray, and calibrating the stereoscopic display based on the position of the first eye and the position of the second eye.



FLOATING IMAGE DISPLAY UNIT

Thu, 06 Apr 2017 08:00:00 EDT

A floating image display unit according to an embodiment of the technology includes an optical plate and one or a plurality of reflectors. The optical plate includes a plurality of optical elements arranged in a matrix on a substrate having a normal in a Z-axis direction, and each of the optical elements is configured to regularly reflect an entering light beam of a Z-axis direction component and recursively reflect an entering light beam of an XY-axis direction component. The one or the plurality of reflectors are configured to reflect light outputted from a light emitter or a light irradiation target object disposed on rear surface side of the optical plate, thereby causing the light to obliquely enter a rear surface of the optical plate.



SYSTEMS AND METHODS FOR MEDIATED-REALITY SURGICAL VISUALIZATION

Thu, 06 Apr 2017 08:00:00 EDT

The present technology relates generally to systems and methods for mediated-reality surgical visualization. A mediated-reality surgical visualization system includes an opaque, head-mounted display assembly comprising a frame configured to be mounted to a user's head, an image capture device coupled to the frame, and a display device coupled to the frame, the display device configured to display an image towards the user. A computing device in communication with the display device and the image capture device is configured to receive image data from the image capture device and present an image from the image data via the display device.



EYE GAZE RESPONSIVE VIRTUAL REALITY HEADSET

Thu, 06 Apr 2017 08:00:00 EDT

A headset for providing images to a user is provided. The headset is configured to attach to a user's head and includes displays positioned adjacent to a user's eyes. A display support arm is connected to the displays. A motor is connected to the display support arm and drives the display support arm between multiple positions. A camera is positioned adjacent to the user's eyes and records the user's eyes. A processor receives data from the camera regarding a position and movement of the user's eyes. The processor uses the data from the camera to drive the motor and to position the display support arm with respect to the user's eyes such that the displays maintain a position adjacent to the user's eyes.



METHOD AND SYSTEM FOR RECALIBRATING SENSING DEVICES WITHOUT FAMILIAR TARGETS

Thu, 06 Apr 2017 08:00:00 EDT

Methods and systems for recalibrating sensing devices without using familiar targets are provided herein. The method may include: capturing a scene using a sensing device; determining whether or not the device is calibrated; in a case that said sensing device is calibrated, adding at least one new landmark to the known landmarks; in a case that said sensing device is not calibrated, matching at least some of the captured objects to objects stored in a database as known landmarks of the scene whose positions are known; and calibrating the sensing device based on the known landmarks. The system may have various architectures that include a sensing device which captures images of the scene and further derives 3D data on the scene in some form.



PHOTOGRAPHING DEVICE AND METHOD OF CONTROLLING THE SAME

Thu, 06 Apr 2017 08:00:00 EDT

A method of controlling a photographing device comprising a plurality of image sensors includes generating color image data from a first image sensor photographing a subject and generating grey image data from a second image sensor photographing the subject, extracting a first edge from the color image data and extracting a second edge from the grey image data, combining the first edge with the second edge to generate a third edge, and combining the third edge with the color image data to generate resultant image data.
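
The edge extraction and combination steps might look like the following sketch, using 3x3 Sobel gradient magnitudes and a per-pixel maximum as the combination rule; the abstract does not specify the actual edge operator or fusion rule, so both are assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude from 3x3 Sobel filters (edge-padded)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += SOBEL_X[i, j] * win
            gy += SOBEL_X.T[i, j] * win
    return np.hypot(gx, gy)

def fuse_edges(color_luma: np.ndarray, grey: np.ndarray) -> np.ndarray:
    """Combine the two edge maps; a per-pixel maximum is one simple rule."""
    return np.maximum(sobel_magnitude(color_luma), sobel_magnitude(grey))
```

The grey sensor typically has better signal-to-noise than the color sensor's derived luma, so its edge map sharpens regions where the color edges are weak.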



HD Color Imaging Using Monochromatic CMOS Image Sensors Integrated In 3D Package

Thu, 06 Apr 2017 08:00:00 EDT

HD color video using monochromatic CMOS image sensors integrated in a 3D package is provided. An example 3DIC package for color video includes a beam splitter to partition received light of an image stream into multiple light outputs. Multiple monochromatic CMOS image sensors are each coupled to one of the multiple light outputs to sense a monochromatic image stream at a respective component wavelength of the received light. Each monochromatic CMOS image sensor is specially constructed, doped, controlled, and tuned to its respective wavelength of light. A parallel processing integrator or interposer chip heterogeneously combines the respective monochromatic image streams into a full-spectrum color video stream, including parallel processing of an infrared or ultraviolet stream. The parallel processing of the monochromatic image streams provides reconstruction to HD or 4K HD color video at low light levels. Parallel processing to one interposer chip also enhances speed, spatial resolution, sensitivity, low light performance, and color reconstruction.



PROJECTOR SYSTEM AND CALIBRATION BOARD

Thu, 06 Apr 2017 08:00:00 EDT

This invention provides a simple projector system that can be operated by a user who is not an expert in image processing technology. The projector system comprises a projector (1), a personal computer (2), a mouse (3), and a calibration board (4). A checkerboard pattern is added to the calibration board (4), and an intersection point serves as a marker. A cursor, which is projected from the projector (1) onto the calibration board (4), is used as an intuitive input interface. The operator, while watching the cursor, operates the mouse (3) to place the cursor onto the calibration marker. In this state, the operator clicks the mouse (3), thereby selecting the calibration marker. The corresponding projection image coordinates are then acquired on the basis of the selection instruction.



PROJECTOR AND PROJECTOR SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A projector according to the present disclosure includes a projection unit, a lens unit, an imaging unit, and a computing unit. The projection unit displays a first image. The lens unit projects the first image that is displayed by the projection unit. The imaging unit images a second image, which is projected by another projector, through the lens unit. The computing unit computes a distance between the surface on which the second image is projected and the other projector. The distance is computed from imaging data of the second image imaged by the imaging unit, and from spacing information on the spacing between the projector concerned and the other projector.
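
If the geometry is reduced to a standard pinhole/stereo model, the distance computation follows the classic triangulation relation Z = f·B/d, with the spacing between the two projectors as the baseline. A sketch under that simplifying assumption (the abstract does not disclose its actual formula):

```python
def distance_to_surface(focal_px: float, spacing_m: float,
                        disparity_px: float) -> float:
    """Stereo triangulation Z = f * B / d, with the projector spacing
    as the baseline B and the imaged shift of the second image as d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * spacing_m / disparity_px

# Focal length 1000 px, projectors 0.5 m apart, 100 px disparity:
z = distance_to_surface(1000.0, 0.5, 100.0)  # 5.0 m
```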



RECONFIGURABLE MOBILE DEVICE

Thu, 06 Apr 2017 08:00:00 EDT

A reconfigurable mobile device is provided. The reconfigurable mobile device includes a first body, a second body that is disposed at a side of the first body and is movable with respect to the first body, a multistage supporter that is provided between the first body and the second body and comprises at least two supporting members that are inserted and received in at least one of the first body and the second body according to movement of the first body and the second body, a screen that is provided in the multistage supporter and is wound or unwound according to the movement of the first body and the second body, and a projector that is provided in at least one of the first body and the second body and configured to project an image toward the screen.



PROJECTION DEVICE

Thu, 06 Apr 2017 08:00:00 EDT

A projection device is provided, which is capable of suppressing or preventing problems that result from a light emission delay of a light source. The projection device includes a plurality of light sources respectively emitting a laser light; a scanning part enabling the laser light to scan; and a controller controlling an output of the laser light. In a scanning range including a first scanning range and a second scanning range, the controller controls such that the output has a first light amount in the first scanning range and has a second light amount which is greater than the first light amount in the second scanning range.



PROJECTION SYSTEMS AND METHODS

Thu, 06 Apr 2017 08:00:00 EDT

Image display apparatus and methods may use a single imaging element such as a digital mirror device (DMD) to spatially modulate plural color channels. A color channel may include a light steering element such as a phase modulator. Steered light from a light steering element may be combined with or replaced by additional light to better display bright images. These technologies may be provided together or applied individually.



Extended Color Processing on Pelican Array Cameras

Thu, 06 Apr 2017 08:00:00 EDT

Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths; determining an initial estimate by combining the first set of images into a first fused image; combining the second set of images into a second fused image; spatially registering the fused images; denoising the fused images using bilateral filters; normalizing the second fused image in the photometric reference space of the first fused image; combining the fused images; and determining a high resolution image that, when mapped through a forward imaging transformation, matches the input images within at least one predetermined criterion.
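
The photometric-normalization step can be approximated by a gain/offset match of image statistics. A sketch; mean/std matching is a stand-in for the patent's reference-space mapping, which is not disclosed in the abstract:

```python
import numpy as np

def photometric_normalize(src: np.ndarray, ref: np.ndarray,
                          eps: float = 1e-12) -> np.ndarray:
    """Rescale src so that its mean and standard deviation match ref's
    (a simple gain/offset model of photometric normalization)."""
    return (src - src.mean()) / (src.std() + eps) * ref.std() + ref.mean()
```

After this step the two fused images live on a comparable intensity scale, which is what makes the subsequent combination meaningful.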



INTELLIGENT MONITORING DEVICE AND METHOD

Thu, 06 Apr 2017 08:00:00 EDT

An intelligent monitoring device and method are provided in this invention. In the intelligent monitoring device, a video capture unit collects video information using a camera in real time; an audio processing unit collects audio information using a pickup in real time; an audio/video analysis unit performs a recognition analysis on the video information and the audio information collected in real time and sends an analysis result to a control unit; and, according to the analysis result sent from the audio/video analysis unit, if a monitored value that is greater than a corresponding threshold value exists in the video information or audio information collected in real time, the control unit triggers a speaker to issue an alarm through the audio processing unit. By monitoring an on-site environment in real time from different perspectives, it is convenient to comprehensively monitor situations in public places, enabling the security department to take effective early measures against situations that endanger public safety and to prevent major accidents.



Network Video Recorder Cluster and Method of Operation

Thu, 06 Apr 2017 08:00:00 EDT

A video recorder cluster for use in a video surveillance system includes multiple recorder nodes that can each participate in processing of user-specified operations such as playback, recording, and analysis of the video streams. The video recorder cluster determines the required resources for processing the video data of streams, determines the available resources on each of the recorder nodes, and forwards the video data of the streams to recorder nodes that either include the required resources or include a preferred set of available resources in accordance with the required resources. The video recorder cluster presents a single cluster address for client user devices to access the resources of the video recorder cluster, thereby enabling the video recorder cluster to appear as a single virtual network video recorder to clients.



AUTOMATIC SWITCHING BETWEEN DYNAMIC AND PRESET CAMERA VIEWS IN A VIDEO CONFERENCE ENDPOINT

Thu, 06 Apr 2017 08:00:00 EDT

A video conference endpoint includes a camera to capture video and a microphone array to sense audio. One or more preset views are defined. Images in the captured video are processed with a face detection algorithm to detect faces. Active talkers are detected from the sensed audio. The camera is controlled to capture video from the preset views, and from dynamic views created without user input and which include a dynamic overview and a dynamic close-up view. The camera is controlled to dynamically adjust each of the dynamic views to track changing positions of detected faces over time, and dynamically switch the camera between the preset views, the dynamic overview, and the dynamic close-up view over time based on positions of the detected faces and the detected active talkers relative to the preset views and the dynamic views.
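
The switching policy can be sketched as a small decision function over detected talker positions and preset-view regions. The rules below are illustrative, not the patent's actual logic:

```python
def contains(box, point):
    """True if (x, y) falls inside the (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = box
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def select_view(talker_positions, preset_views):
    """Prefer a preset view that covers an active talker; otherwise
    close up on a lone talker, or fall back to a dynamic overview."""
    for name, box in preset_views.items():
        if any(contains(box, t) for t in talker_positions):
            return ("preset", name)
    if len(talker_positions) == 1:
        return ("dynamic", "close-up")
    return ("dynamic", "overview")

presets = {"podium": (0, 0, 100, 100)}
```

A real endpoint would add hysteresis so the camera does not flip between views on every frame, but the core decision is this kind of prioritized lookup.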



PANORAMIC IMAGE PLACEMENT TO MINIMIZE FULL IMAGE INTERFERENCE

Thu, 06 Apr 2017 08:00:00 EDT

An automatic process for producing professional, directed, production crew quality, video for videoconferencing is described. Rule based logic is integrated into an automatic process for producing director quality video for videoconferencing. An automatic process can include a method for composing a display for use in a video system having an active talker video stream and a panoramic view video stream having more than one person in video. The method can include determining a region of interest in a panoramic view video using motion detection and presence sensors, and preparing the panoramic view video by centering the region of interest and by zooming towards the region of interest, based upon the location of persons in the panoramic view video. The method includes determining placement of the panoramic view video on a composite display to prevent the panoramic view video from overlaying the display of an active talker on the active talker video stream.



OPTIMIZING PANORAMIC IMAGE COMPOSITION

Thu, 06 Apr 2017 08:00:00 EDT

An automatic process for producing professional, directed, production crew quality, video for videoconferencing is described. The process includes a method for positioning a 360 degree panoramic strip view of a room. The method includes receiving motion detection data, accepting presence sensor data, receiving video data from cameras related to a room view, and centering individuals in a room view, based on the received motion detection data, the accepted presence sensor data and received video data. The method also includes zooming onto the centered individuals in the room view, obtaining sound source localization data and active talker information, and determining how to arrange the display of individuals in a logical manner that is visually pleasing and aids understanding. Rule based logic is applied to assist with the automatic processing of video into director quality video production. Various video sources may be used as video stream sources for the automated system.



CONVERSATIONAL PLACEMENT OF SPEAKERS AT ONE ENDPOINT

Thu, 06 Apr 2017 08:00:00 EDT

An automatic process for producing professional, directed, production crew quality, video for videoconferencing is described. Rule based logic is integrated into an automatic process for producing director quality video for videoconferencing. The automatic process uses sensor data to process video streams for video conferencing. A method and system for automatically processing sensor data on room activity into general room analytics for further processing by application of rules based logic to produce production quality video for videoconferencing is described. Sensory devices and equipment, for example motion, infrared, audio, sound source localization (SSL) and video are used to detect room activity or room stimulus. The room activity is analyzed (for example, to determine whether individuals are in the subject room, speaker identification and movement within the room) and processed to produce room analytics. Speaker identification and relative placement information are used to logically depict speakers conversing with one another.



USING THE LOCATION OF A NEAR-END USER IN A VIDEO STREAM TO ADJUST AUDIO SETTINGS OF A FAR-END SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A video conferencing system is described that includes a near-end and a far-end system. The near-end system records both audio and video of one or more users proximate to the near-end system. This recorded audio and video is transmitted to the far-end system through the data connection. The video stream and/or one or more settings of the recording camera are analyzed to determine the amount of a video frame occupied by the recorded user(s). The video conferencing system may directly analyze the video frames themselves and/or a zoom setting of the recording camera to determine a ratio or percentage of the video frame occupied by the recorded user(s). By analyzing video frames associated with an audio stream, the video conferencing system may drive a speaker array of the far-end system to more accurately reproduce sound content based on the position of the recorded user in a video frame.
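
The frame-occupancy analysis reduces to a ratio of the detected user's bounding-box area to the frame area, which the far end can then map to speaker-array settings. A sketch with hypothetical box coordinates:

```python
def occupancy_ratio(user_box, frame_w, frame_h):
    """Fraction of the video frame covered by the user's bounding box."""
    x0, y0, x1, y1 = user_box
    return ((x1 - x0) * (y1 - y0)) / (frame_w * frame_h)

# A user box covering the middle quarter of a 1080p frame:
ratio = occupancy_ratio((480, 270, 1440, 810), 1920, 1080)  # 0.25
```

A zoom setting could substitute for the box when frame analysis is unavailable, as the abstract suggests, since zoom and occupied fraction are monotonically related for a fixed subject distance.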



METHOD AND SYSTEM FOR PROVIDING AN AUDIO/VIDEO CONFERENCE

Thu, 06 Apr 2017 08:00:00 EDT

A method and system for providing an audio/video conference includes receiving audio from a moderator via a circuit-switched telephone network, transmitting a representation of the audio to a first listener group via the circuit-switched telephone network, and transmitting a representation of the audio to a second listener group via a packet-switched network. The audio/video conference may be transmitted to the first listener group and the second listener group in real-time or near real-time (e.g., within a few seconds). The method and system may be used with a circuit-switched telephone network such as, for example, a public switched telephone network. Further, the method and system may be used with a packet-switched network such as, for example, the Internet. The method and system further provide synchronization of video data, including slide data, and audio data related to the audio/video conference.



VIDEO INTEGRATION

Thu, 06 Apr 2017 08:00:00 EDT

According to one aspect, a web optimized user device is provided. The web optimized device reduces complexity and facilitates interaction with web-based services and content. The web optimized device can be configured without a hard drive, facilitating integration of web-based services into a computing experience. The web optimized device presents a user interface that integrates video functionality into every aspect of the computer content accessed. In particular, a display manager manages the user interface presented and integrates video displays and features into the content displays in a content and/or context aware manner. These displays permit a user to intuitively interact with the video content and features while the user changes content, for example, web-based services, web-based applications, and other media content, without interruption of or interference from the video content.



VIDEO MANAGEMENT DEFINED EMBEDDED VOICE COMMUNICATION GROUPS

Thu, 06 Apr 2017 08:00:00 EDT

Embodiments include a system, method, and computer program product that enable Push to Talk (PTT) communication within a video management software (VMS) system using embedded PTT controls. An embodiment operates by monitoring a plurality of video feeds using a VMS graphics user interface (GUI) where each video feed is a component of an interactive multimedia object (IMMO) displayed in the VMS GUI. A talk group is logically associated with a video feed from the plurality of video feeds by coupling the talk group to an embedded PTT control component of the IMMO containing the video feed. Upon detecting that the embedded PTT control component corresponding to the video feed is activated, PTT communication is enabled between each member of the talk group regarding content displayed by the video feed.



Method and Design for Optimum Camera and Display Alignment of Center of the Room Video Conferencing Systems

Thu, 06 Apr 2017 08:00:00 EDT

Systems for videoconferencing are designed for settings where people are seated around a video conferencing system. The systems include a camera so the far site can see the local participants, and displays that show the far site. The displays are properly aligned with the cameras so that when people at the far site view the displayed images of the near site, they appear to have eye contact with the near site. Obtaining camera and display alignments that provide this apparent eye contact requires meeting a series of constraints relating to the sizes and angles of the components and the locations of the participants.



INTERACTIVE AND SHARED SURFACES

Thu, 06 Apr 2017 08:00:00 EDT

The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and get depth information of and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that captures or scans exactly what the user sees, which includes both local and remote objects, drawings, annotations, hands, and so forth.



Display and Television Set

Thu, 06 Apr 2017 08:00:00 EDT

A display includes a display panel, a board mounting portion, and a circuit board arranged at one surface of the board mounting portion. The display also includes a cover portion positioned only at a portion of the one surface of the board mounting portion. The portion includes a region in which the circuit board is arranged. The cover portion includes a bottom surface portion and a first side surface portion that extends from the bottom surface portion toward the one surface of the board mounting portion. The first side surface portion includes a first opening portion.



TELEVISION PROTECTION AND CARRYING DEVICE

Thu, 06 Apr 2017 08:00:00 EDT

The present invention provides methods and systems for a television protection and carrying device that includes a top portion with a base portion, a first end, a second end, and two sides; a bottom portion with a first end, a second end, and two sides; and a support device pivotally connected to the top portion and having a first position and a second position, wherein in the first position the support device is adjacent the top portion and in the second position it is spaced apart from the top portion to hold the top portion in a vertical position.



SOLID-STATE IMAGING DEVICE AND CAMERA SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A solid-state imaging device and a camera system are disclosed. The solid-state imaging device includes a pixel unit and a pixel signal readout circuit. The pixel signal readout circuit includes a plurality of comparators disposed to correspond to a pixel column array, and a plurality of counters. Each counter includes a first amplifier, a second amplifier, and a mirror circuit to form a current mirror in parallel with the second amplifier. The first amplifier includes differential transistors, initializing switches connected between gates and collectors of the differential transistors, and first and second capacitors connected to each of the gates of the differential transistors. The second amplifier includes an initializing switch and a third capacitor. The mirror circuit includes a gate input transistor whose gate is inputted with a voltage sampled by the first amplifier or a voltage sampled by the second amplifier.



ELECTRONIC DEVICE AND METHOD FOR GENERATING IMAGE DATA

Thu, 06 Apr 2017 08:00:00 EDT

A method is provided for an electronic device including an image sensor that acquires an optical signal corresponding to an object and a controller that controls the image sensor. The method includes identifying a mode for generating an image corresponding to the object by using the optical signal, determining a setting of at least one image attribute to be used for generating the image at least based on the mode, generating image data by using pixel data corresponding to the optical signal at least based on the setting, and displaying the image corresponding to the object through a display functionally connected to the electronic device at least based on the image data.



SOLID STATE IMAGING DEVICE AND IMAGING SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

Provided is a solid state imaging device including a plurality of pixels, a signal line on which a pixel signal is transmitted, a load transistor having a drain connected to the signal line, a readout circuit that reads out the pixel signal from the signal line, and a control unit that controls a current flowing in the load transistor in accordance with a potential of a control terminal. When a reference potential of the pixel fluctuates relative to a reference potential of the readout circuit, the potential of the control terminal relative to the potential of the source of the load transistor is changed in the same phase as the fluctuation of the reference potential of the pixel.



OVERSAMPLED IMAGE SENSOR WITH CONDITIONAL PIXEL READOUT

Thu, 06 Apr 2017 08:00:00 EDT

In a pixel array within an integrated-circuit image sensor, each of a plurality of pixels is evaluated to determine whether charge integrated within the pixel in response to incident light exceeds a first threshold. N-bit digital samples corresponding to the charge integrated within at least a subset of the plurality of pixels are generated, and then applied to a lookup table to retrieve respective M-bit digital values (M being less than N), wherein a stepwise range of charge integration levels represented by possible states of the M-bit digital values extends upward from a starting charge integration level that is determined based on the first threshold.
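The N-bit-to-M-bit lookup step can be illustrated with a small table. This is a minimal sketch, not the patented circuit: the bit widths, the threshold value, and the equal-step quantization above the threshold are all illustrative assumptions.

```python
def build_lut(n_bits, m_bits, threshold):
    """Map N-bit samples to M-bit values whose stepwise range of charge
    integration levels extends upward from `threshold`.

    Samples at or below the threshold map to 0; the remaining range is
    divided into 2**m_bits - 1 equal steps (an illustrative choice).
    """
    n_levels = 1 << n_bits
    m_max = (1 << m_bits) - 1
    span = n_levels - 1 - threshold
    lut = []
    for sample in range(n_levels):
        if sample <= threshold:
            lut.append(0)
        else:
            # Scale the above-threshold portion into the M-bit range.
            lut.append(min(m_max, 1 + (sample - threshold - 1) * m_max // span))
    return lut

# Example: 10-bit samples compressed to 4-bit values starting at threshold 64.
lut = build_lut(10, 4, 64)
```

An ADC would then emit `lut[sample]` instead of the full N-bit sample, trading resolution below the threshold for a shorter output word.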



HIGH DYNAMIC RANGE IMAGING PIXELS WITH IMPROVED READOUT

Thu, 06 Apr 2017 08:00:00 EDT

An imaging system may include an image sensor having an array of dual gain pixels. Each pixel may be operated using an improved three read method and an improved four read method such that all signals are read in a high gain configuration in order to prevent electrical offset in signal levels. Each pixel may be operated using an improved three read, two analog to digital conversion (ADC) method in which a frame buffer is used to store calibration data. Each pixel may be operated using an improved three read, three ADC method in which no frame buffer is required. A high dynamic range image signal may be produced for each pixel based on signals read from the pixel and on light conditions.



IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM

Thu, 06 Apr 2017 08:00:00 EDT

An object of the present invention is to provide an image processing apparatus capable of reducing, in advance, noise that occurs due to color adjustment. The present invention is an image processing apparatus having: a noise amount prediction unit configured to predict a noise amount based on RAW image data acquired by image capturing under fixed image capturing conditions, a color adjustment parameter and the image capturing conditions; an image capturing condition determination unit configured to determine image capturing conditions the contents of which have been changed based on the predicted noise amount; a noise reduction parameter determination unit configured to determine a noise reduction parameter; and a noise reduction unit configured to perform noise reduction processing in accordance with the noise reduction parameter for RAW image data acquired by image capturing under the image capturing conditions determined by the image capturing condition determination unit.



METHOD OF PROCESSING IMAGE OF ELECTRONIC DEVICE AND ELECTRONIC DEVICE THEREOF

Thu, 06 Apr 2017 08:00:00 EDT

According to various embodiments of the present disclosure, an electronic device includes: a camera module that obtains an image; and a processor which implements the method, including setting a quadrangular area in the obtained image including a reference pixel and a corresponding pixel disposed respectively at corners of the quadrangular area, calculating an accumulated-pixel value for each pixel of the obtained image corresponding to the quadrangular area, such that a particular pixel value for a particular pixel is a sum of the pixel values beginning from the reference pixel, continuing through an arrangement of pixels in the quadrangular area and terminating at the particular pixel, and generating an image quality processing-dedicated frame of accumulated-pixel values based on calculated accumulated-pixel values of each pixel of the frame.
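The accumulated-pixel frame described above is essentially a summed-area (integral) table. A minimal sketch, assuming a top-left reference pixel and row-major traversal (both assumptions of this sketch, not details stated in the abstract):

```python
def accumulate(frame):
    """Summed-area table: each output cell holds the sum of all pixel
    values from the top-left reference pixel through that cell."""
    h, w = len(frame), len(frame[0])
    acc = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inclusion-exclusion over the already-computed neighbors.
            acc[y][x] = (frame[y][x]
                         + (acc[y - 1][x] if y else 0)
                         + (acc[y][x - 1] if x else 0)
                         - (acc[y - 1][x - 1] if y and x else 0))
    return acc

acc = accumulate([[1, 2], [3, 4]])  # acc[1][1] == 10, the sum of the whole area
```

Once built, the sum over any rectangle inside the quadrangular area can be read off with four lookups, which is what makes such a frame useful for image quality processing.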



METHODS AND APPARATUS FOR FACILITATING SELECTIVE BLURRING OF ONE OR MORE IMAGE PORTIONS

Thu, 06 Apr 2017 08:00:00 EDT

Methods and apparatus provide for simulated, e.g., synthetic, aperture control. In some embodiments a user, after a camera autofocus or user selection of an object to focus on, sets a blur level and views the effect. Once the user controlled blur setting is complete, images are captured with the in-focus, e.g., autofocus, setting and other images are captured with the focus set to capture a blurred image based on the set blur level. In focus and out of focus images are captured in parallel using different camera modules in some embodiments. A composite image is generated by combining portions of one or more sharp and blurred images. Thus a user can capture portions of images with a desired level of blurriness without risking blurring other portions of an image while controlling how the captured images are combined to form a composite image with in-focus and blurred portions.



SPATIAL AND TEMPORAL ALIGNMENT OF VIDEO SEQUENCES

Thu, 06 Apr 2017 08:00:00 EDT

Some embodiments allow a video editor to spatially and temporally align two or more video sequences into a single video sequence. As used in this application, a video sequence is a set of images (e.g., a set of video frames or fields). A video sequence can be from any media, such as broadcast media or recording media (e.g., camera, film, DVD, etc.). Some embodiments are implemented in a video editing application that has a user selectable alignment operation, which when selected aligns two or more video sequences. In some embodiments, the alignment operation identifies a set of pixels in one image (i.e., a “first” image) of a first video sequence and another image (i.e., a “second” image) of a second video sequence. The alignment operation defines a motion function that describes the motion of the set of pixels between the first and second images. The operation then defines an objective function based on the motion function. The operation finds an optimal solution for the objective function. Based on the objective function, the operation identifies a transform, which it then applies to the first image in order to align the first image with the second image.
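The motion-function and objective-function machinery can be illustrated in a heavily simplified form by restricting the transform to an integer translation and the objective to a sum of squared differences; the exhaustive search below is an illustrative stand-in for whatever optimizer a real implementation would use.

```python
def best_translation(img_a, img_b, search=2):
    """Find the integer (dx, dy) shift of img_a that minimizes the sum of
    squared differences against img_b (pixels shifted in from outside the
    frame are treated as 0)."""
    h, w = len(img_a), len(img_a[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y - dy, x - dx
                    a = img_a[sy][sx] if 0 <= sy < h and 0 <= sx < w else 0
                    cost += (a - img_b[y][x]) ** 2
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]
```

The returned shift plays the role of the identified transform: applying it to the first image aligns it with the second.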



VIRTUAL FLYING CAMERA SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A virtual flying camera system is disclosed. According to one embodiment, the virtual flying camera system includes a plurality of cameras disposed along a track and spaced apart from each other, a recording system configured to receive input video streams from the plurality of cameras, and a control system configured to generate an output video stream from the input video streams. The output video stream includes a view of a virtually flying camera following a target at a user-adjustable speed. The control system generates the output video stream by merging a plurality of portions of the input video streams from adjacent cameras of the plurality of cameras.



External Electronic Viewfinder for SLR Camera

Thu, 06 Apr 2017 08:00:00 EDT

An external electronic viewfinder includes either the lens assembly of the eye-level zoom-in viewfinder (1, 2) and the miniature camera (5), or the lens assembly of the eye-level zoom-in viewfinder (1, 2), the miniature camera (5) and the movable reflector (3). When electronic view-finding is not needed, the viewfinder is used as an optical eye-level zoom-in viewfinder. When it is used as an electronic viewfinder, the miniature camera (5) can be moved directly onto the optical axis. Alternatively, the movable reflector (3) can be moved onto the optical axis so that it reflects the image onto the miniature camera (5), which then sends it to external display devices (6) via a wired or wireless link. The photographer can view, compose, focus and capture remotely by observing the external display devices.



METHODS AND APPARATUS FOR COMPENSATING FOR MOTION AND/OR CHANGING LIGHT CONDITIONS DURING IMAGE CAPTURE

Thu, 06 Apr 2017 08:00:00 EDT

Methods and apparatus for compensating for motion and/or changing light conditions during image capture, e.g., in video, through use of multiple camera modules and/or images captured by multiple camera modules are described. During image capture time periods a plurality of camera modules capture images. During a first image capture time period a first camera module captures an image including a complete image of a user selected scene area of interest. During an additional image capture time period the first camera module captures an image including a portion of the scene area of interest; however, a portion of the scene area image is missing from the captured image, e.g., due to camera motion, occlusion and/or lighting conditions. Captured images from other camera modules and/or from during different image capture time periods which include the missing portion are identified and ranked; the highest ranked image is used in generating a composite image.



IMAGE PROCESSING APPARATUS AND METHOD

Thu, 06 Apr 2017 08:00:00 EDT

An image processing apparatus comprises: a dividing unit that divides two frame images into a plurality of divided areas; a determination unit that determines a representative point for each of the divided areas in one of the two frame images; a setting unit that, for each of the two frame images, sets image portions for detecting movement between the two frame images, based on the representative points; and a detection unit that detects movement between the two frame images based on correlation values of image signals in the set image portions, wherein for each of the divided areas, the determination unit determines a feature point of the divided area or a predetermined fixed point as the representative point of the divided area in accordance with a position of the divided area in the one frame image.



USER INTERFACE FOR WIDE ANGLE PHOTOGRAPHY

Thu, 06 Apr 2017 08:00:00 EDT

The disclosed technology includes switching between a normal or standard-lens UI and a panoramic or wide-angle photography UI responsive to a zoom gesture. In one implementation, a user gesture corresponding to a “zoom-out” command, when received at a mobile computing device associated with a minimum zoom state, may trigger a switch from a standard lens photo capture UI to a wide-angle photography UI. In another implementation, a user gesture corresponding to a “zoom-in” command, when received at a mobile computing device associated with a nominal wide-angle state, may trigger a switch from a wide-angle photography UI to a standard lens photo capture UI.
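The gesture-driven UI switch reduces to a small state machine. A sketch with assumed state and gesture names (the abstract does not name them):

```python
def next_ui(current_ui, gesture, zoom_state):
    """Switch between the standard-lens and wide-angle photography UIs.

    A zoom-out gesture at the minimum zoom state enters the wide-angle UI;
    a zoom-in gesture at the nominal wide-angle state returns to the
    standard UI. All other combinations leave the UI unchanged.
    """
    if current_ui == "standard" and gesture == "zoom-out" and zoom_state == "minimum":
        return "wide-angle"
    if current_ui == "wide-angle" and gesture == "zoom-in" and zoom_state == "nominal":
        return "standard"
    return current_ui
```

Gating the switch on the zoom state is what distinguishes this from an ordinary zoom: the same gesture either zooms or changes UI depending on where the zoom level already sits.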



IMAGING DEVICE

Thu, 06 Apr 2017 08:00:00 EDT

In a preferred aspect of the invention, an imaging optical system is provided that has a wide-angle optical system and a telephoto optical system in different regions. A directional sensor, which has a plurality of pixels including two-dimensionally arranged photoelectric conversion elements, pupil-divides light beams that are incident through the wide-angle optical system and the telephoto optical system, and selectively receives the light beams. An image acquisition unit acquires a wide-angle image received from the directional sensor through the wide-angle optical system and a telephoto image received from the directional sensor through the telephoto optical system. The directions of the imaging optical axes of the wide-angle optical system and the telephoto optical system of the imaging optical system are different from each other.



Image Generation Method Based On Dual Camera Module And Dual Camera Apparatus

Thu, 06 Apr 2017 08:00:00 EDT

Provided is image generation based on a dual camera module. The dual camera module comprises a first camera lens of a large single-pixel size and a second camera lens of a high resolution. The first camera lens generates a first image. The second camera lens generates a second image. The first image and the second image are synthesized to generate a third image. Correspondingly, also provided is a dual camera module. With the dual camera module, by combining the advantages of the two camera lenses, color noise and luminance noise of an image are reduced.



Characterization of a physical object based on its surface roughness

Thu, 06 Apr 2017 08:00:00 EDT

The present invention is directed to a method and apparatus that involve improved characterization of an object based on its surface roughness and other unique features without having to necessarily define a fixed and predetermined region of interest. In accordance with one aspect, the present invention provides a method for characterizing an object based on a pattern of the object's surface roughness. The method comprises the steps of obtaining a unique image of a feature on the surface of the object, converting the image obtained into certain electrical signals and processing the electrical signals so they are associated with the object and thereby provide a characterization of the object that is used to generate a unique identifying signature for the object.



Eye/Head Controls for Camera Pointing

Thu, 06 Apr 2017 08:00:00 EDT

A setting of a video camera is remotely controlled. Video from a video camera is displayed to a user using a video display. At least one eye of the user is imaged as the user is observing the video display, a change in an image of at least one eye of the user is measured over time, and an eye/head activity variable is calculated from the measured change in the image using an eyetracker. The eye/head activity variable is translated into a camera control setting, and an actuator connected to the video camera is instructed to apply the camera control setting to the video camera using a processor.



IMAGE CONTEXT BASED CAMERA CONFIGURATION

Thu, 06 Apr 2017 08:00:00 EDT

An approach to configuring camera settings to reduce the intrusiveness of image capture on image subjects. A preliminary image is analyzed to determine an image context. The image context is compared to intrusiveness context cues, either known or discovered from analyzing historical images associated with the subjects identified in the preliminary image. If any intrusiveness context cues are found in the image context, then configuration parameters associated with those cues are changed to minimize the intrusiveness before the image is captured.



IMAGE PROCESSING APPARATUS, ELECTRONIC APPARATUS, DISPLAY PROCESSING APPARATUS, AND METHOD FOR CONTROLLING THE SAME

Thu, 06 Apr 2017 08:00:00 EDT

An image processing apparatus includes a selection unit that selects any of a plurality of items arranged in a first area, a switching unit that switches a mode between at least a first mode, in which an item displayed in the first area is selectable, and a second mode, in which the image processing apparatus accepts an operation for an item line including a plurality of items arranged in a second area, and a control unit that performs control to display the item line so that a boundary area between two items included in the item line is not at a predetermined position in the second area in the second mode, and display the item line so that the boundary area is at the predetermined position based on switching to the first mode.



ELECTRONIC APPARATUS

Thu, 06 Apr 2017 08:00:00 EDT

An electronic apparatus is disclosed. The electronic apparatus comprises a first camera and a second camera. The second camera takes a still image while the first camera takes a video.



AUTOFOCUS METHOD FOR MICROSCOPE AND MICROSCOPE COMPRISING AUTOFOCUS DEVICE

Thu, 06 Apr 2017 08:00:00 EDT

A microscope including an objective having a focal plane in a sample space, and an autofocus device comprising a light modulator for generating a luminous modulation object that is intensity-modulated periodically along one direction, an autofocus illumination optical unit that images the modulation object such that its image arises in the sample space, an autofocus camera, an autofocus imaging optical unit that images the image of the modulation object in the sample space onto the autofocus camera, a control device, which receives signals of the autofocus camera and determines an intensity distribution of the image of the modulation object and generates a focus control signal therefrom. The control device determines an intensity distribution of the image of a luminous comparison object imaged by the optical unit to correct the intensity distribution of the image of the modulation object with regard to reflectivity variations in the sample space.



LENS BARREL, REPLACEMENT LENS, IMAGING DEVICE, AND CONTROL PROGRAM

Thu, 06 Apr 2017 08:00:00 EDT

A lens barrel attached to an imaging device, including a focus lens unit that is movable in an optical axis direction; an actuator that moves the focus lens unit along the optical axis direction; a drive control section that controls drive of the actuator; and a setting section that is capable of setting a movement range of the focus lens unit to be one of a first range and a second range that differ from each other with respect to at least one of one end and another end, each range defining the range in which the focus lens unit is allowed to move. Even when the movement range of the focus lens unit is set by the setting section to be one of the first range and the second range, the drive control section removes the setting of the movement range when instructions are received from the imaging device.



METHODS AND APPARATUSES FOR PROVIDING IMPROVED AUTOFOCUS USING CURVE-FITTING

Thu, 06 Apr 2017 08:00:00 EDT

Certain implementations of the disclosed technology may include methods and apparatuses for calculating an optimal lens position for a camera utilizing curve-fitting auto-focus. According to an example implementation, a method (900) is provided. The method (900) may include calculating modulation transfer function values for first and second test image frames associated with respective first and second lens positions of a camera (902, 904). The method may also include identifying, from a database including a plurality of predetermined modulation transfer function curves associated with the camera, a particular predetermined modulation transfer function curve based on the first and second modulation transfer function values (906). The method may also include calculating an optimal lens position for the camera based on the identified particular predetermined modulation transfer function curve (908).
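A toy version of the curve-fitting step, assuming the predetermined MTF curves are stored as position-to-MTF tables and that the optimal lens position is each curve's peak (both assumptions of this sketch, not details given in the abstract):

```python
def autofocus(curves, pos1, mtf1, pos2, mtf2):
    """Pick the stored MTF curve closest to the two measured (position, MTF)
    pairs, then return the lens position at that curve's peak.

    `curves` is a list of dicts mapping lens position -> MTF value; the two
    measurement positions must appear as keys in every curve.
    """
    def err(curve):
        # Squared error between the curve and the two measurements.
        return (curve[pos1] - mtf1) ** 2 + (curve[pos2] - mtf2) ** 2

    best = min(curves, key=err)
    return max(best, key=best.get)

curves = [
    {0: 0.2, 1: 0.8, 2: 0.4},   # peak at position 1
    {0: 0.1, 1: 0.5, 2: 0.9},   # peak at position 2
]
autofocus(curves, 0, 0.21, 2, 0.38)  # matches the first curve -> 1
```

The appeal of the approach is that only two test frames are needed, instead of sweeping the lens through many positions as in contrast-detection autofocus.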



LED SPEAKER IDENTIFIER SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A light-emitting diode (LED) speaker identification system that illuminates at least one LED in the direction in which a camera having a 360 degree field of view is recording. The LED speaker identification system includes a base, a printed circuit board coupled to a surface of the base, and a plurality of light-emitting diodes (LEDs) coupled to a surface of the printed circuit board around a circumference thereof. The plurality of LEDs are electrically connected to a camera having a 360 degree viewing angle.



IMAGING DEVICE, IMAGING SYSTEM, IMAGING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Thu, 06 Apr 2017 08:00:00 EDT

An imaging device includes a first communication unit, an imaging control unit, and a recording determination unit that determines whether or not image data is to be recorded according to an operation instruction. The first communication unit transmits recording permission information to an external communication device when a user operation is a recording permission operation, and transmits recording prohibition information to the communication device when the user operation is a recording prohibition operation. The imaging control unit associates image data with the determination result determined by the recording determination unit and either one of the recording permission information and the recording prohibition information, and controls a recording unit to record the image data.



ROCK CLIMBING WALLS, FALL SAFETY PADS, AND ACCESSORIES

Thu, 06 Apr 2017 08:00:00 EDT

Videography of climbing surfaces and improved positional control of unmanned aerial vehicles (UAV) are disclosed. Currently, videography using UAVs (e.g. drones) avoids vertical structures. Rather, such drones are designed to navigate open aerial spaces. For example, UAVs often include sensors and piloting programming to avoid vertical walls. In climbing scenarios, however, a UAV provides improved videography of the vertical surface as described herein. As such, attributes of a vertical, semi-vertical, or non-uniform vertical surface can be recognized by a UAV. These surface attributes can be used to pilot and/or control a UAV. In certain embodiments, a transmitter is disposed on the climber, a normal climbing surface vector is determined, and UAV control settings are generated thereby to position the UAV. Improved communication can also be provided so as to relay real-time video capture of climbing activity to one or more persons on the ground.



HIGH DYNAMIC RANGE IMAGING PIXELS WITH IMPROVED READOUT

Thu, 06 Apr 2017 08:00:00 EDT

An imaging system may include an image sensor having an array of dual gain pixels. Each pixel may be operated using a two read method such that all signals are read in a high gain configuration in order to improve the speed or to reduce the power consumption of imaging operations. Each pixel may be operated using a two read, two analog-to-digital conversion method in which two sets of calibration data are stored. A high dynamic range (HDR) image signal may be produced for each pixel based on signals read from the pixel and on light conditions. The HDR image may be produced based on a combination of high and low gain signals and one or both of the two sets of calibration data. A system of equations may be used for generating the HDR image. The system of equations may include functions of light intensity.
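One common way to combine dual-gain reads into an HDR value is to use the high-gain signal until it nears saturation and then switch to the scaled low-gain signal. The sketch below assumes that scheme with a single stored calibration offset, which is a simplification of the two calibration sets the abstract describes.

```python
def hdr_combine(high_gain, low_gain, gain_ratio, high_sat, cal_offset=0.0):
    """Produce an HDR pixel value from high- and low-gain reads.

    Below saturation, the high-gain read is used directly (better SNR in
    dim conditions). Near saturation, the low-gain read is corrected by a
    stored calibration offset and scaled up by the gain ratio so the two
    regimes line up on one intensity scale.
    """
    if high_gain < high_sat:
        return high_gain
    return (low_gain - cal_offset) * gain_ratio
```

For example, with a gain ratio of 8 and a saturation level of 1000, a dim pixel reading 100 at high gain is kept as 100, while a saturated pixel whose low-gain read is 200 (offset 10) becomes (200 - 10) * 8 = 1520, extending the representable range well past the high-gain ceiling.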



HIGH DYNAMIC RANGE SOLID STATE IMAGE SENSOR AND CAMERA SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A high dynamic range solid state image sensor and camera system are disclosed. In one aspect, the solid state image sensor includes a first wafer including an array of pixels, each of the pixels comprising a photosensor, and a second wafer including an array of readout circuits. Each of the readout circuits is configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels and each of the readout circuits includes a counter. Each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold. Each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel.



OPTICAL SCANNING ENDOSCOPE APPARATUS

Thu, 06 Apr 2017 08:00:00 EDT

The disclosed optical scanning endoscope apparatus includes a scanner that scans an object with illumination light from a light source, a light amount detector that detects a light amount of the illumination light from the light source, and a controller that controls output of the light source based on the light amount detected by the light amount detector. The controller controls the output of the light source based on an integral value of the light amount detected by the light amount detector during an immediately prior predetermined period.



Camera Module Having a Sealing Member

Thu, 06 Apr 2017 08:00:00 EDT

A camera module is provided. The camera module includes: a printed circuit board on which an image sensor is to be mounted; a base arranged on an upper side of the printed circuit board; a cover can coupled to an upper side of the base; and a sealing member interposed between the base and the cover can.



Auxiliary Lens for Mobile Photography

Thu, 06 Apr 2017 08:00:00 EDT

An auxiliary optical assembly for a camera-enabled mobile device includes a removable lens assembly including a lens holder, a lens coupled to the lens holder, and a coupling interface. A lens attachment interface is configured to stably couple to the mobile device, and is configured in accordance with the coupling interface of the removable lens assembly to stably couple and align the removable lens along the optical path of the miniature camera module.



Array Lens Module

Thu, 06 Apr 2017 08:00:00 EDT

An array lens module includes a housing, an image sensor with a photosensitive area and a lens module installed inside the housing. The lens module is formed by at least two glass lenses. The first lens with the first imaging area and the second lens with the second imaging area are molded on the lens module. The image processor respectively captures the first image area and the second image area at a certain length-width ratio within the first imaging area and the second imaging area. A parallax between the image in the first image area and the image in the second image area is accordingly generated. The lens module is an all-glass structure with high transmittance and excellent achromatization performance.



Camera Module

Thu, 06 Apr 2017 08:00:00 EDT

The present invention relates to a camera module, the module including: a PCB; an image sensor mounted on the PCB and formed with an image pickup device; a base mounted on the PCB and including a plated portion formed at a lower center with an opening mounted with an IR filter; a lower spring plate formed with a conductive material; a spacer arranged on an upper surface of the lower spring plate and forming a staircase structure by a rib wrapping a periphery of the lower spring plate to supportively apply a pressure to the lower spring plate; a lens actuator including a bobbin, and a yoke; an upper spring plate coupled to an upper surface of the lens actuator; and a cover arranged on an upper surface of the upper spring plate.



LENS BRACKET ASSEMBLY AND GIMBAL USED THEREWITH

Thu, 06 Apr 2017 08:00:00 EDT

The present invention discloses a lens bracket assembly for supporting an imaging device. The imaging device includes a body and a lens connected to the body. The lens bracket assembly includes a supporting plate and a bracket. The supporting plate is used for mounting the imaging device, and the supporting plate includes a first side. The bracket includes a supporting portion of which the shape matches the lens of the imaging device and a fixing portion connected with the supporting portion. The fixing portion is fixedly arranged at the first side. The supporting portion is used for supporting the lens. The present invention further relates to a gimbal that uses the lens bracket assembly.



Seamless Video Pipeline Transition Between WiFi and Cellular Connections for Real-Time Applications on Mobile Devices

Thu, 06 Apr 2017 08:00:00 EDT

Performing a real-time application on a mobile device, involving communication of audio/video packets with a remote device. The mobile device may initially communicate the audio/video packets on a first communication channel with the remote device. During the real-time communication, the mobile device may determine if no packets have been received by the mobile device from the remote device for a first threshold period of time. If no packets have been received by the mobile device from the remote device for the first threshold period of time, the mobile device may establish a second communication channel for transmission of the audio/video packets with the remote device. In response to using the second communication channel, the mobile device may modify a resolution or bit rate of the audio/video packets transmitted to the remote device.
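The timeout-driven channel switch can be sketched as a small state machine. The class below is illustrative only; the channel names, the injectable clock, and the three-second default are assumptions, not values from the abstract.

```python
import time

class ChannelManager:
    """Switch from a primary channel (e.g. WiFi) to a secondary channel
    (e.g. cellular) after no packets arrive for `timeout` seconds."""

    def __init__(self, timeout=3.0, now=time.monotonic):
        self.timeout = timeout
        self.now = now            # injectable clock, useful for testing
        self.channel = "primary"
        self.last_rx = self.now()

    def on_packet(self):
        """Record receipt of an audio/video packet from the remote device."""
        self.last_rx = self.now()

    def poll(self):
        """Check the silence interval and fail over if it exceeds the
        threshold; a real client would also lower the bitrate or
        resolution when moving to the secondary channel."""
        if self.channel == "primary" and self.now() - self.last_rx >= self.timeout:
            self.channel = "secondary"
        return self.channel
```

Injecting the clock keeps the failover logic deterministic under test, which matters for a mechanism that is otherwise only exercised during real network outages.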



FILTERING SOUNDS FOR CONFERENCING APPLICATIONS

Thu, 06 Apr 2017 08:00:00 EDT

A conferencing system includes a display device that displays video received from a remote communication device of a communication partner. An audio stream is transmitted to the remote communication device. The audio stream includes real-world sounds produced by one or more real-world audio sources captured by a microphone array and virtual sounds produced by one or more virtual audio sources. A relative volume of sounds in the audio stream is selectively adjusted based, at least in part, on real-world positioning of corresponding audio sources, including real-world and/or virtualized audio sources.



CONTROL DEVICE FOR RECORDING SYSTEM, AND RECORDING SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A control device for a recording system includes a sound acquisition unit, a recording control unit, an item management unit and a presentation control unit. The sound acquisition unit acquires sound data from a sound pickup device. The recording control unit records information based on the sound data as recording information corresponding to one of predetermined items. The item management unit extracts one of the items, in which the recording information has not yet been recorded, as an uninput item. The presentation control unit causes a presentation device to show the uninput item.



APPARATUS, METHOD AND MOBILE TERMINAL FOR PROVIDING OBJECT LOSS PREVENTION SERVICE IN VEHICLE

Thu, 06 Apr 2017 08:00:00 EDT

An apparatus for providing an object loss prevention service in a vehicle includes a sensor unit configured to sense an in-vehicle object of a passenger inside the vehicle; and a processor configured to display in-vehicle object state information including at least one of a position of the object and a type of the object, and output an alarm indicating the object has been left in the vehicle in response to the passenger getting out of the vehicle.



LOCALIZATION OF A ROBOT IN AN ENVIRONMENT USING DETECTED EDGES OF A CAMERA IMAGE FROM A CAMERA OF THE ROBOT AND DETECTED EDGES DERIVED FROM A THREE-DIMENSIONAL MODEL OF THE ENVIRONMENT

Thu, 06 Apr 2017 08:00:00 EDT

Methods, apparatus, systems, and computer-readable media are provided for using a camera of a robot to capture an image of the robot's environment, detecting edges in the image, and localizing the robot based on comparing the detected edges in the image to edges derived from a three-dimensional (“3D”) model of the robot's environment from the point of view of an estimated pose of the robot in the environment. In some implementations, the edges are derived based on rendering, from the 3D model of the environment, a model image of the environment from the point of view of the estimated pose—and applying an edge detector to the rendered model image to detect model image edges from the model image.



CAMERA CALIBRATION USING SYNTHETIC IMAGES

Thu, 06 Apr 2017 08:00:00 EDT

A camera is to capture an actual image of a target pattern. A calibration device is to render pixels in a synthetic image of the target pattern by tracing rays from the pixels to corresponding points on the target pattern based on model parameters for a camera. The calibration device is to also modify the model parameters to minimize a measure of distance between intensities of the pixels in the synthetic image and intensities of pixels in the actual image.
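The "modify the model parameters to minimize a measure of distance" step can be sketched for a single scalar parameter using a ternary search over an assumed-unimodal cost. Real calibrators optimize many parameters at once, typically with gradient-based methods, so this is only a one-dimensional illustration; `render` is a hypothetical stand-in for the ray-tracing renderer.

```python
def calibrate(render, actual, lo, hi, iters=50):
    """Find the scalar model parameter minimizing the squared intensity
    distance between the synthetic image render(param) and the actual
    image. Assumes the cost is unimodal on [lo, hi]."""
    def cost(p):
        synth = render(p)
        return sum((s - a) ** 2 for s, a in zip(synth, actual))

    for _ in range(iters):
        # Shrink the bracket toward the minimum by a third each step.
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

The key idea from the abstract survives even in this toy form: the optimizer never needs corner detection or feature matching, only pixel intensities of the synthetic and actual images.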



CAMERA-BASED SPEED ESTIMATION AND SYSTEM CALIBRATION THEREFOR

Thu, 06 Apr 2017 08:00:00 EDT

Provided is a visual vehicle speed estimation system based on camera output and calibration for such a vehicle speed estimation system. The calibration allows use of the system where the absolute camera position is unknown. Calibration determines an absolute-position-independent relationship between an image space placement and a physical space position relative to the camera. The calibration is done on the basis of camera output using vehicle features of known dimensions and some assumed physical constraints related thereto to provide a conversion relationship between image coordinates and physical space coordinates in a physical space defined in relation to the camera. This relationship is then used to estimate vehicle speeds based only on the visual information provided by the camera.
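The calibration-from-known-dimensions idea can be reduced to its simplest form: derive an image-to-physical scale from a vehicle feature of known size, then convert pixel displacement over time into speed. This flat-scale sketch ignores perspective, which a real calibration must model; the 4.5 m car length in the example is an assumed value.

```python
def meters_per_pixel(known_length_m, length_px):
    """Calibrate a conversion scale from a vehicle feature of known
    physical size (e.g. an assumed typical car length)."""
    return known_length_m / length_px

def estimate_speed_kmh(displacement_px, dt_s, scale_m_per_px):
    """Convert a pixel displacement over dt seconds into km/h."""
    return displacement_px * scale_m_per_px / dt_s * 3.6

# A 4.5 m car spanning 150 px gives 0.03 m/px; 100 px of motion in
# 0.2 s is then 15 m/s, i.e. 54 km/h.
speed = estimate_speed_kmh(100, 0.2, meters_per_pixel(4.5, 150))
```

The scale is what the abstract calls the absolute-position-independent relationship: it is recovered from image content alone, without knowing where the camera is mounted.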



VIDEO FLOW ANALYSING METHOD AND CAMERA DEVICE WITH VIDEO FLOW ANALYSING FUNCTION

Thu, 06 Apr 2017 08:00:00 EDT

A video flow analyzing method and a related camera device are applied to determine whether an object passes through a monitoring area. The video flow analyzing method includes drawing two boundaries on a video image correlative to the monitoring area to form a counting path, utilizing endpoints of the two boundaries to define an inlet and an outlet of the counting path, setting an initial point while the object moves into the counting path by crossing one of the boundaries, the inlet and the outlet, setting a final point while the object moves out of the counting path by crossing one of the boundaries, the inlet and the outlet, and utilizing the initial point and the final point to determine whether the object passes through the counting path.
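The pass-through decision reduces to bookkeeping on which edge of the counting path the object crossed at its initial and final points. A minimal sketch (the edge labels are assumptions of this sketch):

```python
def passed_through(entry_edge, exit_edge):
    """An object counts as passing through the counting path only when it
    enters via the inlet and leaves via the outlet, or vice versa.
    Entering and leaving across the same boundary, or across one of the
    drawn boundary lines, does not count."""
    return {entry_edge, exit_edge} == {"inlet", "outlet"}
```

So an object recorded as `("inlet", "outlet")` or `("outlet", "inlet")` increments the count, while one that wanders in over a boundary line and back out again does not.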



METHOD AND DEVICE FOR CONVERTING A COLOR FILTER ARRAY

Thu, 06 Apr 2017 08:00:00 EDT

Provided are a method and a device for converting a White-Red-Green-Blue (WRGB) color filter array into a Red-Green-Blue (RGB) color filter array, so that it can be easily applied to a commercial digital camera. The method includes (a) correcting the color of a White-Red-Green-Blue (WRGB) color filter array, (b) converting the WRGB color filter array into a Red-Green-Blue (RGB) color filter array, and (c) correcting the green channel of the RGB color filter array by using multichannel color difference values.
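Steps (a) through (c) can be illustrated per pixel group. This sketch is an assumed, simplified reading: the color-correction is reduced to a per-channel gain, the white-to-green conversion uses the crude approximation W ≈ R + G + B, and the color-difference correction coefficient (0.5) is invented for illustration; none of these are the patented values.

```python
# Illustrative sketch of the three steps (a)-(c) for one WRGB sample group.

def convert_wrgb_pixel(w, r, g, b, gain=1.0):
    # (a) simple colour correction (per-channel gain as a stand-in).
    w, r, g, b = (gain * c for c in (w, r, g, b))
    # (b) replace the white sample with a green estimate: assuming
    # W ~ R + G + B, a crude green estimate at the white site is W - R - B.
    g_at_w = max(0.0, w - r - b)
    # (c) green correction using a colour-difference term that pulls the
    # measured green toward the estimate at the white site.
    g_corrected = g + 0.5 * (g_at_w - g)
    return r, g_corrected, b

print(convert_wrgb_pixel(0.9, 0.3, 0.4, 0.2))
```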



METHOD AND A SYSTEM FOR IDENTIFYING REFLECTIVE SURFACES IN A SCENE

Thu, 06 Apr 2017 08:00:00 EDT

Methods and a system for identifying reflective surfaces in a scene are provided herein. The system may include a sensing device configured to capture a scene. The system may further include a storage device configured to store three-dimensional positions of at least some of the objects in the scene. The system may further include a computer processor configured to attempt to obtain a reflective surface representation for one or more candidate surfaces selected from the surfaces in the scene. In a case where the attempt is successful, the computer processor is further configured to determine that the candidate surface is indeed a reflective surface defined by the obtained surface representation. According to some embodiments of the present invention, in a case where the attempt is unsuccessful, the computer processor determines that the recognized portion of the object is an object independent of the stored objects.



APPARATUS AND METHOD FOR PROVIDING ATTITUDE REFERENCE FOR VEHICLE PASSENGERS

Thu, 06 Apr 2017 08:00:00 EDT

In one aspect, the present disclosure relates to video systems and methods for emulating a view through an aircraft window for a passenger in an interior passenger suite. The view may be emulated by determining the seated passenger's perspective view relative to each of at least one monitor mounted to a side wall of the interior passenger suite, and capturing video data of scenery exterior to the aircraft at the perspective view(s) for display on the monitor(s).



PROCESSING APPARATUS, PROCESSING SYSTEM, PROCESSING PROGRAM, AND PROCESSING METHOD

Thu, 06 Apr 2017 08:00:00 EDT

A processing apparatus includes a distance image acquirer, a moving-object detector, and a background recognizer. The distance image acquirer acquires a distance image containing distance information for each pixel. The moving-object detector detects a moving object from the distance image. The background recognizer generates, from the distance image acquired by the distance image acquirer, a background model in which a stationary object recognized as background of the moving object is modeled. The moving-object detector changes its method for detecting the moving object based on a relative positional relation between the moving object and the background model.



SYSTEMS AND METHODS FOR DETECTING AN OBJECT

Thu, 06 Apr 2017 08:00:00 EDT

Systems and methods are provided for detecting an object in front of a vehicle. In one implementation, an object detecting system includes an image capture device configured to acquire a plurality of images of an area, a data interface, and a processing device programmed to compare a first image to a second image to determine displacement vectors between pixels. The processing device is also programmed to search for a region of coherent expansion that is a set of pixels in at least one of the first image and the second image, for which there exists a common focus of expansion and a common scale magnitude such that the set of pixels satisfy a relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude. The processing device is further programmed to identify presence of a substantially upright object based on the set of pixels.
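The "coherent expansion" relationship described above can be written out directly: for an upright object approached head-on, each pixel's displacement vector points away from a common focus of expansion (FOE) with a common scale, i.e. d_i = s · (p_i − FOE). The following check is a sketch under that assumption; the tolerance and the candidate FOE/scale values are illustrative.

```python
# Sketch: verify that a set of pixels forms a region of coherent expansion,
# i.e. every displacement vector is consistent with one FOE and one scale.

def coherent_expansion(pixels, displacements, foe, scale, tol=1e-6):
    for (px, py), (dx, dy) in zip(pixels, displacements):
        ex, ey = scale * (px - foe[0]), scale * (py - foe[1])
        if abs(dx - ex) > tol or abs(dy - ey) > tol:
            return False
    return True

pixels = [(10.0, 10.0), (20.0, 15.0), (5.0, 30.0)]
foe = (12.0, 18.0)
scale = 0.05
# Synthesize displacements exactly consistent with (foe, scale).
displacements = [(scale * (x - foe[0]), scale * (y - foe[1])) for x, y in pixels]

print(coherent_expansion(pixels, displacements, foe, scale))         # -> True
print(coherent_expansion(pixels, displacements, (0.0, 0.0), scale))  # -> False
```

A full implementation would instead search for the (FOE, scale) pair best explaining the measured optical flow, rather than verify a given one.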



VEHICLE DETECTION WARNING DEVICE AND VEHICLE DETECTION WARNING METHOD

Thu, 06 Apr 2017 08:00:00 EDT

A vehicle detection warning device may have: a rear camera; a vehicle detecting unit that detects a rear vehicle according to high-brightness regions included in obtained images; a warning start deciding unit that causes a warning processing unit to start a warning when the rear vehicle is detected; a summarizing unit that counts the number of times the high-brightness region is included in images obtained by a plurality of imaging operations performed by the rear camera; an area calculating unit that calculates the area of the high-brightness region; and a warning termination deciding unit that causes the warning processing unit to stop the warning when the number of times counted by the summarizing unit decreases and the area calculated by the area calculating unit falls to or below a predetermined value.
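The start/stop decision amounts to a small state machine with hysteresis: the warning starts on detection, and stops only when both termination conditions hold (the detection count is falling and the region's area is at or below a threshold). The threshold and counts below are assumed values for illustration.

```python
# Sketch of the warning start/termination logic described above.

def update_warning(warning_on, detected, prev_count, count, area, area_limit=50):
    if not warning_on:
        # Warning start decision: begin warning when a rear vehicle is detected.
        return detected
    # Warning termination decision: count decreasing AND area small enough.
    if count < prev_count and area <= area_limit:
        return False
    return True

state = update_warning(False, detected=True, prev_count=0, count=3, area=120)
print(state)  # -> True (warning started)
state = update_warning(state, detected=True, prev_count=3, count=2, area=40)
print(state)  # -> False (warning stopped)
```

Requiring both conditions avoids flicker: a momentary dip in either signal alone does not silence the warning.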



MEASURING DEVICE, MEASURING SYSTEM, MEASURING METHOD, AND PROGRAM

Thu, 06 Apr 2017 08:00:00 EDT

A measuring device includes a data acquisition unit that acquires measurement data, including width direction acceleration of a road surface on which a moving object moves, from an acceleration sensor provided on a structure having the road surface, and a moving object information acquisition unit that acquires information relating to the moving object moving on the road surface, on the basis of the width direction acceleration.



METHOD AND SYSTEM FOR DETECTING AND PRESENTING VIDEO FEED

Thu, 06 Apr 2017 08:00:00 EDT

A computing device with one or more processors and memory displays a video monitoring user interface on its display. The video monitoring user interface includes a first region for displaying a live video feed and/or a recorded video feed from the video camera, and a second region for displaying a single event timeline. A current video feed indicator is movable on the timeline to indicate the temporal position of the video feed displayed in the first region. The temporal position includes a past time and a current time, corresponding to the previously recorded video feed and the live video feed, respectively. While the current video feed indicator is moved to indicate the temporal position of the video feed displayed in the first region, video segments corresponding to the one or more events have a higher priority for display in the first region than video segments that do not contain any event.
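The priority rule for choosing which segment to show can be sketched as a ranking over candidate segments near the indicator position. The segment representation (start time, has-event flag) and the earliest-start tie-break are assumptions for illustration, not the patented data model.

```python
# Sketch of the display-priority rule: among segments near the indicator's
# temporal position, a segment containing an event outranks event-free
# segments; earlier start time breaks ties.

def pick_segment(candidates):
    # candidates: (start_time, has_event) pairs; sort key puts event
    # segments first (not has_event == False sorts before True).
    return min(candidates, key=lambda seg: (not seg[1], seg[0]))

near = [(100, False), (103, True), (105, True), (98, False)]
print(pick_segment(near))  # -> (103, True)
```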



3D IMAGE SYSTEM, METHOD, AND APPLICATIONS

Thu, 06 Apr 2017 08:00:00 EDT

Apparatus (systems) and methods are described for generating and displaying a virtual true 3D image viewable by a single eye of a viewer. A fixed-curve mirror is translated along an optical axis in synchrony with a temporally-modulated 2D image generator to generate a virtual image in multiple virtual image planes, enabling depth perception of the image by a single eye of the viewer.



EXAMINING DEVICE AND METHOD FOR EXAMINING INNER WALLS OF A HOLLOW BODY

Thu, 06 Apr 2017 08:00:00 EDT

An examining device for examining inner walls of a hollow body comprises: a rod-shaped camera device designed to record an image transversely with respect to its longitudinal axis; adjustment means for moving the camera device into and out of the hollow body; a grazing light illumination device for illuminating the inner walls and having emission directions that are transverse with respect to receiving directions, from which the camera device receives light from the illuminated inner walls, wherein an angle between the emission and receiving directions is between 45° and 135°; diameter determination means for determining an inner diameter of a cavity of the hollow body and comprising a light source and optical measuring means. Furthermore, a corresponding method for examining inner walls is described.



TEST APPARATUS FOR CHECKING CONTAINER PRODUCTS

Thu, 06 Apr 2017 08:00:00 EDT

A test apparatus checks container products (13), which are preferably composed of plastic materials and produced using blow-moulding, filling and sealing methods, and which are filled with fluid. For production-related reasons, the fluid can contain particulate contamination that is deposited on the container wall when the container (13) is still, that floats freely in the fluid when the container (13) is moving and/or that changes position owing to the movement, and that can in this way be detected by means of a sensor device (37). The apparatus is characterized in that, by means of a vibration device (23), the respective container (13) can be made to oscillate at a prespecifiable excitation frequency in such a way that the respective particulate contamination (47) in the fluid can be detected.



ELECTRONIC IMAGING FLOW-MICROSCOPE FOR ENVIRONMENTAL REMOTE SENSING, BIOREACTOR PROCESS MONITORING, AND OPTICAL MICROSCOPIC TOMOGRAPHY

Thu, 06 Apr 2017 08:00:00 EDT

An electronic imaging flow-microscope for remote environmental sensing, bioreactor process monitoring, and optical microscopic tomography applications is described. A fluid conduit has a port on each end of a thin flat transparent fluid transport region. A planar illumination surface contacts one flat side of the transparent fluid transport region and a planar image sensing surface contacts the other flat side. Light from the illumination surface travels through the transparent fluid transport region to the planar image sensing surface, producing a light field affected by the fluid and objects present. The planar image sensing surface creates electrical image signals responsive to the light field. The planar illumination surface can be light emitting elements such as LEDs, OLEDs, or OLET whose illumination can be sequenced in an image formation process. The flow microscope can further comprise flow-restricting valves, pumps, energy harvesting arrangements, and power management.



SYSTEM AND METHOD FOR TESTING ULTRASOUND TRANSDUCER

Thu, 06 Apr 2017 08:00:00 EDT

An apparatus for testing an ultrasound device having an ultrasound transducer and a controller includes: a housing; an absorbing layer inside the housing, wherein the absorbing layer is configured to receive ultrasound energy from the ultrasound transducer; and a thermal camera for detecting temperature at the absorbing layer. A method for testing an ultrasound device having an ultrasound transducer and a controller includes: operating the ultrasound transducer to deliver ultrasound energy towards an absorbing layer of a testing apparatus while using a thermal camera to detect temperature at the absorbing layer; obtaining thermal image data from the camera; and analyzing the thermal image data to determine whether the ultrasound device is operating as desired.



USING PHOTOGRAMMETRY TO AID IDENTIFICATION AND ASSEMBLY OF PRODUCT PARTS

Thu, 06 Apr 2017 08:00:00 EDT

A user may be aided in modifying a product that is an assemblage of parts. This aid may involve a processor obtaining images of a target part captured by the user on a mobile device camera. The processor may compare, based on the captured images and a plurality of images of identified parts, the target part to the identified parts. Based on the comparison, the processor may determine an identity of the target part. This aid may also involve a processor obtaining images of a first configuration of a partial assembly of the product captured by a mobile device camera. The processor may compare, based on the captured images, the first configuration to a correct configuration of the partial assembly. Based on the comparison, the processor may determine that the first configuration does not match the correct configuration and may notify the user accordingly.



SELF-CALIBRATING WHEEL ALIGNER WITH IMPROVED PORTABILITY

Thu, 06 Apr 2017 08:00:00 EDT

A portable vehicle alignment system has a vertical central column with a carriage movable along its length, and a pair of camera arms pivotably attached to the carriage, each with a camera pod. The camera pods each have a camera for capturing image data of a respective vehicle-mounted target. One pod also has a calibration target disposed in a known relationship to its camera, and the other pod has a calibration camera disposed in a known relationship to its camera for capturing images of the calibration target. The camera arms pivot between an extended position where the cameras are disposed to capture image data of the vehicle targets and the calibration camera is disposed to capture images of the calibration target, and a folded position where the aligner has a width smaller than the width between the camera pods.



Construction Site Monitoring System

Thu, 06 Apr 2017 08:00:00 EDT

A job site monitoring system includes a tower-mounted scanner that is situated to monitor all or substantially all of a particular construction site. The scanner is configured to provide data to a processor configured to determine the height of fill material deposited at a job site. The processor is adapted to communicate with a remote client who can review the data collected at the job site. The processor may also be configured to compare the live fill height data to predetermined parameters to determine whether an error condition exists. An alert may be issued upon detection of such an error to enable corrective action to be taken before construction continues at the site.
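The fill-height error check described above reduces to comparing a live measurement against predetermined parameters and raising an alert when the deviation exceeds tolerance. The function name, field names, and all numeric values below are illustrative assumptions.

```python
# Sketch of the error-condition check: flag an alert when the measured fill
# height deviates from the target by more than a tolerance.

def check_fill(measured_height_m, target_height_m, tolerance_m=0.05):
    deviation = abs(measured_height_m - target_height_m)
    return {"deviation_m": round(deviation, 3), "alert": deviation > tolerance_m}

print(check_fill(1.32, 1.20))  # -> {'deviation_m': 0.12, 'alert': True}
print(check_fill(1.22, 1.20))  # -> {'deviation_m': 0.02, 'alert': False}
```

Issuing the alert before construction continues is what lets the remote client trigger corrective action in time.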



LIGHT REFERENCE SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

A light reference system includes first, second, and third rail followers. The second rail follower is disposed between the first and third rail followers. First and second light sources are disposed on the first and third rail followers and operable to emit light towards the second rail follower. First and second imaging devices are disposed on the second rail follower. The first imaging device is operable to receive the light emitted by the first light source and provide first image data. The second imaging device is operable to receive the light emitted by the second light source and provide second image data. A processing device is configured to perform a measurement, based on the first and second image data, indicating a relative position of the second rail follower with respect to at least one of the first and third rail followers.



System and Method for Inspecting Road Surfaces

Thu, 06 Apr 2017 08:00:00 EDT

A vehicle includes at least one infrared source emitting light at first and second wavelengths corresponding to a water-absorption wavelength and an ice-absorption wavelength respectively. The vehicle further includes a plenoptic camera system configured to detect a backscatter intensity of the first and second wavelengths and generate a depth map that indicates water or ice on a road in response to the backscatter intensity associated with one of the wavelengths being less than a threshold intensity.
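The classification rule is simple: absorption at the water or ice wavelength reduces backscatter, so an intensity below the threshold at one wavelength flags that condition on the road. The threshold and intensity values below are illustrative assumptions, and real systems would also combine this with the depth map.

```python
# Sketch of the backscatter-threshold rule for road-surface classification.

def classify_surface(water_band_intensity, ice_band_intensity, threshold=0.4):
    # Low backscatter at the water-absorption wavelength -> water present.
    if water_band_intensity < threshold:
        return "water"
    # Low backscatter at the ice-absorption wavelength -> ice present.
    if ice_band_intensity < threshold:
        return "ice"
    return "dry"

print(classify_surface(0.2, 0.8))  # -> water
print(classify_surface(0.7, 0.3))  # -> ice
print(classify_surface(0.7, 0.8))  # -> dry
```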



REAR-VIEW VERIFICATION DEVICE, AND AUTOMOBILE EQUIPPED WITH SAME

Thu, 06 Apr 2017 08:00:00 EDT

A rear-view verification apparatus includes a display apparatus and a camera apparatus. The display apparatus includes a display device, a first controller, a first transmitter, and a first receiver. The first controller causes the display device to display an image in accordance with an image signal fed from the camera apparatus. The camera apparatus includes a second receiver, a second transmitter, a second controller, and an imaging element. The first controller transmits a synchronization signal to the second controller via the first transmitter and the second receiver. The second controller transmits the image signal to the first controller via the second transmitter and the first receiver. The first controller transmits the image signal to the display device.



VIDEO SYNTHESIS SYSTEM, VIDEO SYNTHESIS DEVICE, AND VIDEO SYNTHESIS METHOD

Thu, 06 Apr 2017 08:00:00 EDT

A video display system performs a video conversion process on video from a camera mounted on a vehicle and displays the resulting video. The system includes a plurality of cameras; a detecting unit that detects an object of interest around the vehicle based on information acquired through the plurality of cameras, other sensors, or a network; a transforming/synthesizing unit that transforms and synthesizes the videos photographed by the plurality of cameras using a shape of a virtual projection plane, a virtual viewpoint, and a synthesis method, which are decided according to position information of the object of interest detected by the detecting unit; and a display unit that displays the video transformed and synthesized by the transforming/synthesizing unit.



ANGLED ENDOSCOPE TIP IMAGE CAPTURE UNIT

Thu, 06 Apr 2017 08:00:00 EDT

A family of endoscopes includes a non-zero degree endoscope (210) and a zero degree endoscope (200). Zero degree endoscope (200) includes a first image capture unit (220A) mounted in a distal portion (270A) of the zero degree endoscope with a lengthwise axis of the first image capture unit substantially parallel to a lengthwise axis of that distal portion. Non-zero degree endoscope (210) includes a second image capture unit (220B) mounted in a distal portion (270B) of the non-zero degree endoscope with a lengthwise axis of the second image capture unit intersecting a lengthwise axis of that distal portion at a non-zero angle α. The first and second image capture units have substantially identical non-folded optical paths.



ENDOSCOPE SYSTEM

Thu, 06 Apr 2017 08:00:00 EDT

An endoscope system includes a head portion, a connector portion and a CCU. The head portion includes a test signal generating portion configured to generate a first test pattern signal. The connector portion includes: a test signal generating portion configured to generate a second test pattern signal, which is the same pattern as the first test pattern signal; a first comparison circuit configured to output a result of comparing the first test pattern signal with the second test pattern signal; and a test signal generating portion configured to generate a third test pattern signal. The CCU includes: a test signal generating portion configured to generate a fourth test pattern signal, which is the same pattern as the third test pattern signal; and a second comparison circuit configured to output a result of comparing the third test pattern signal with the fourth test pattern signal.
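The pairwise self-test structure can be sketched as follows: each stage generates its own local copy of the expected test pattern, and a comparison step reports whether the received pattern matches, localizing faults to either the head-connector or connector-CCU link. The pattern contents are illustrative assumptions.

```python
# Sketch of the two-stage test-pattern comparison: a mismatch at the first
# comparison implicates the head-to-connector link; a mismatch at the second
# implicates the connector-to-CCU link.

def compare_patterns(received, local):
    return received == local

pattern = [0x55, 0xAA, 0x55, 0xAA]              # locally generated pattern
print(compare_patterns([0x55, 0xAA, 0x55, 0xAA], pattern))  # -> True
corrupted = [0x55, 0xAA, 0x54, 0xAA]            # bit error on the link
print(compare_patterns(corrupted, pattern))      # -> False
```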