Thu, 27 Oct 2016 08:00:00 EDT
A computing device with processor(s) and memory has a video monitoring user interface for displaying a video feed on a display of a client system. When events are detected in the video feed, an events feed is displayed in the video monitoring user interface to present the detected events. For each detected event, the events feed includes a visual representation of the video feed that was recorded at the time of the respective event, an event characteristic indicator indicating a characteristic of the respective event, and a time indicator indicating the time at which the event occurred. Then, in response to detecting user selection of one of the events included in the events feed, the computing device requests the recorded video feed that was recorded during the selected event and displays the requested recorded video feed in the video monitoring user interface.
According to an embodiment of the disclosure, a system for displaying streamed video from a distance comprises one or more capturing devices and one or more servers. Each of the one or more capturing devices has a plurality of sensors configured to capture light used in forming image frames for a video stream. The plurality of sensors is arranged around a shape to capture the light at different focal points and at different angles. The one or more servers are configured to receive light data from the one or more capturing devices, and to provide a dynamically selected subset of the light data captured by the plurality of sensors to a remote end user as a stream of image frames for a video stream. The subset of the light data provided by the one or more servers at a particular instance depends on selections from the end user.
A method for processing an image in a video device comprises reading an image and combining the image with metadata related to the image by embedding the metadata in or with the image. The method further includes transforming the image and extracting the metadata from the image before encoding the image in an encoder, and utilizing the metadata as input in further processing.
Camera intrinsic calibration may be performed using an object geometry. An intrinsic camera matrix may then be recovered. A homography is fit between the object and camera coordinate systems. View transformations are finally recovered.
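The homography-fitting step between object and camera coordinate systems can be sketched with the standard direct linear transform (DLT); this is a generic formulation for illustration, not the patent's specific procedure, and the function names are assumptions:

```python
import numpy as np

def fit_homography(obj_pts, img_pts):
    """Fit a 3x3 homography H mapping object-plane points to image points
    via the direct linear transform (DLT). Needs >= 4 correspondences."""
    A = []
    for (X, Y), (x, y) in zip(obj_pts, img_pts):
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

In a full calibration pipeline, homographies fit from several views of the object are then decomposed to recover the intrinsic matrix and per-view transformations.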
There is provided an information processing apparatus including a display control section which determines, based on parameters associated with the information, on which display layers out of a plurality of mutually overlapping display layers the information is to be displayed.
Stereoscopic display technologies are provided. A computing device generates a stereoscopic display of an object by coordinating a first image and a second image. To reduce discomfort or to reduce diplopic content, the computing device may adjust at least one display property of the first image and/or the second image depending on one or more factors. The factors may include a time period associated with the display of the object, the vergence distance to the object, the distance to the focal plane of the display, contextual data interpreted from the images and/or any combination of these and other factors. Adjustments to the display properties can include a modification of one or more contrast properties and/or other modifications to the images. The adjustment to the display properties may be applied with varying levels of intensity and/or be applied at different times depending on one or more factors and/or contextual information.
A light control and display technology applicable to light redirection and projection with the capacity, in a number of embodiments modified for particular applications, to produce managed light, including advanced images. Applications include miniature to very large scale video displays, optical data processing, 3-dimensional imaging, and lens-less vision enhancement for poor night-driving vision, cataracts and macular degeneration.
In embodiments of selective illumination of a region within a field of view, an illumination system includes light sources implemented for selective illumination of a target within a field of view of an imaging system. The illumination system also includes optics that can be positioned to direct light that is generated by a subset of the light sources to illuminate a region within the field of view. An imaging application can activate the subset of the light sources and position the optics to illuminate the region within the field of view that includes the target of the selective illumination.
Representative implementations of devices and techniques provide adaptive calibration and compensation of wiggling error for imaging devices and systems. In various implementations, the wiggling error is modelled using identified parameters. For example, the parameters (or coefficients) of the wiggling error are estimated using the model. In an implementation, the imaging system uses the derived model to adaptively compensate for the wiggling error during runtime.
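As a rough illustration of modelling a periodic (wiggling) error and compensating it at runtime, the sketch below fits harmonic coefficients by least squares against reference distances and subtracts the model's prediction from new measurements; the two-harmonic model form, the period, and all names are assumptions for illustration, not the patent's actual model:

```python
import numpy as np

def _basis(d, period):
    """Two-harmonic sinusoidal basis evaluated at distances d."""
    w = 2 * np.pi / period
    d = np.asarray(d, dtype=float)
    return np.column_stack(
        [np.sin(w * d), np.cos(w * d), np.sin(2 * w * d), np.cos(2 * w * d)]
    )

def fit_wiggling(measured, reference, period):
    """Least-squares estimate of the harmonic error coefficients from
    calibration pairs of measured vs. ground-truth reference distances."""
    err = np.asarray(measured, float) - np.asarray(reference, float)
    coeffs, *_ = np.linalg.lstsq(_basis(reference, period), err, rcond=None)
    return coeffs

def compensate(measured, coeffs, period):
    """Runtime compensation: subtract the modelled error. The basis is
    evaluated at the measured distance, a good approximation when the
    error is small relative to the period."""
    d = np.asarray(measured, dtype=float)
    return d - _basis(d, period) @ coeffs
```
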
A method is for manufacturing laminated glass. The method includes: cutting a first glass and a second glass out of one sheet of glass having a corrugation formed in a predetermined direction, in such a manner that corresponding sides of the first glass and the second glass are cut in a same cutting direction; and bonding the first glass and the second glass together in such a manner that the corrugation of the first glass and the corrugation of the second glass are aligned with each other.
An imaging device has a light field imager and a processor. The light field imager includes a microlens array, and a light field sensor positioned proximate to the microlens array, having a plurality of pixels and recording light field data of an optical target from light passing through the microlens array. The processor is configured to: receive the light field data of the optical target from the light field sensor, estimate signal to noise ratio and depth of the optical target in the light field data, select a subset of sub-aperture images based on the signal to noise ratio and depth, combine the selected subset of sub-aperture images, and perform image analysis on the combined subset of sub-aperture images.
Provided is a 3D camera module, comprising: a first camera lens group for forming a light path of a first light bundle so as to be one of a left-eye lens group and a right-eye lens group in the 3D camera module; a second camera lens group, disposed apart from the first camera lens group, for forming a light path of a second light bundle so as to be the other of the left-eye lens group and the right-eye lens group in the 3D camera module; an image sensor disposed on the movement path of the first and second light bundles so as to sense a first image by the first light bundle and a second image by the second light bundle; and a movement control unit configured to move the first camera lens group and the second camera lens group in at least one direction of a forward and backward direction and a left and right direction so as to control the first and second images.
Teleconferencing is performed between two telecommunication devices having a display device and a stereoscopic pair of cameras positioned outside opposed sides of the display device at the same level partway along those sides. The separation between the centers of the cameras is in a range having a lower limit of 60 mm and an upper limit of 110 mm to improve the perceived roundness in a displayed stereoscopic image of a head. In captured stereo images that are video images, a head is segmented and the segmented backgrounds are replaced by replacement images that have a lower degree of perceived stereoscopic depth to compensate for non-linear depth perception in the displayed stereo images. Images are shifted vertically to position an eye-line of a detected face at the level of the stereoscopic pair of cameras of the telecommunication device where the images are displayed, improving the naturalness of the displayed image.
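The vertical eye-line alignment step can be illustrated with a simple row shift; this is a minimal sketch under assumed inputs (a detected eye-line row and the display row corresponding to camera level), not the described system's implementation:

```python
import numpy as np

def shift_to_eye_line(img, eye_row, camera_row, fill=0):
    """Shift image rows so the detected eye-line lands at the camera-level
    row. Positive shift moves content down; vacated rows take `fill`."""
    shift = camera_row - eye_row
    out = np.full_like(img, fill)
    h = img.shape[0]
    if shift >= 0:
        out[shift:] = img[: h - shift]
    else:
        out[: h + shift] = img[-shift:]
    return out
```
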
To allow reproducing, in another image processing apparatus, the contents of image processing performed in an image processing apparatus, a provisional color grading apparatus 300 determines normalizing points (CodeValues that serve as references for normalizing) of input image signals according to format information and normalizes the input image signals. The provisional color grading apparatus 300 records values (normalizing information), obtained by converting the normalizing points into numerical values independent of devices, in association with parameters of color grading.
Visual images projected on a projection surface by a projector provide an interactive user interface having end user inputs detected by a detection device, such as a depth camera. The detection device monitors projected images initiated in response to user inputs to determine calibration deviations, such as by comparing the distance between where a user makes an input and where the input is projected. Calibration is performed to align the projected outputs and detected inputs. The calibration may include a coordinate system anchored by its origin to a physical reference point of the projection surface, such as a display mat or desktop edge.
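The deviation-based calibration idea (compare where an input is detected against where the corresponding output is projected, then correct future mappings) can be sketched as a mean-offset estimate; a real system would likely fit a full homography between camera and projector coordinates, and all names here are illustrative:

```python
def calibrate_offset(projected_pts, detected_pts):
    """Estimate the mean (dx, dy) deviation between where outputs are
    projected and where the corresponding user inputs are detected."""
    n = len(projected_pts)
    dx = sum(d[0] - p[0] for p, d in zip(projected_pts, detected_pts)) / n
    dy = sum(d[1] - p[1] for p, d in zip(projected_pts, detected_pts)) / n
    return dx, dy

def to_projector(detected_pt, offset):
    """Map a detected input back into projector coordinates, with the
    origin anchored at the projection surface's physical reference point."""
    return detected_pt[0] - offset[0], detected_pt[1] - offset[1]
```
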
An integrated processing and projection device suitable for use on a supporting surface includes a processor and a projector designed to provide a display on the supporting surface proximate to the device. Various sensors enable object and gesture detection in a detection area in the display area. Trigger zones are defined in the detection area such that interaction of an object or human limb in the detection zone provides object and zone specific feedback by the integrated processing and projection device. The feedback can be provided in the projection area or may be provided as audible or active feedback to a device having active feedback capabilities.
An electronic device including an optical module, a method of operating the electronic device including the optical module, and a non-transitory computer-readable recording medium having recorded thereon a program for performing the method. The electronic device includes an optical module configured to project content on a projection surface and a processor configured to determine whether the electronic device is positioned within a predetermined range of a projection surface and control the optical module to project the content onto the projection surface based on the determination.
A projector includes a DMD that forms an image, a projection lens that projects the image formed by the DMD on a projected surface, and a display control unit that controls an image forming operation of the DMD. The display control unit forms a frame image by sequentially forming a plurality of images using a combined pixel which is composed of a plurality of pixels, and, with respect to two temporally continuous images, the display control unit forms one image at a position that is shifted, relative to the other image, by a distance that corresponds to the pixel pitch of the DMD in a predetermined direction.
This headphone with a retractable monocle video screen can pull down the screen when it is needed and retract it when it is not. The screen display is controlled via a remote smart device. The viewing screen of the retractable video screen is simply a projection of the user's smartphone, smart watch, or tablet screen, connected via a Bluetooth connection created between the headphone device and the satellite device. Thus, the satellite device acts as a remote control, and there is no need for a computing system to be built into the headphones.
An imaging apparatus and an image sensor including the same are provided. The imaging apparatus includes first, second, and third optical devices. At least one of the first, second, and third optical devices is a thin-lens including nanostructures.
Vehicle event recorders are arranged with integrated web servers to provide a simple user interface and control mechanism which may be addressed with commonly available hardware and software. A vehicle event recorder of these inventions couples to a network having a workstation node. The workstation, running any of the many available web browsers, can be used to view, address, control, and perform data transfer, et cetera, by way of data exchange in accordance with simple IP protocols. A vehicle equipped with these systems returns to a household to make a network connection. A local browser is used to see all exposed system controls, provided as predefined web pages by a web server integrated as part of the vehicle event recorder unit.
A method and system for controlling access to access points of an establishment are described. A call may be initiated from an access box to a smart device, thereby activating the smart device to stream video via the Internet from an IP camera located at the access point. An end user of the smart device may communicate with and control access to the access point using the smart device.
In order to reduce the possibility that data of a photograph taken on a camera terminal cannot be easily previewed, and that the data of a taken photograph is leaked due to loss of the camera terminal or the like, there are provided a terminal control means for transmitting data of a photograph with a prescribed identifier, or a photograph display request signal with the prescribed identifier in a recognizable state, and a display means for displaying the photograph data.
A computing device with processor(s) and memory displays a video monitoring user interface on the display. The video monitoring user interface includes a first region for displaying a live video feed and/or a recorded video feed from the video camera and a second region for displaying an event timeline. The event timeline includes a plurality of equally spaced time indicators, each indicating a specific time, and a current video feed indicator indicating the temporal position of the video feed displayed in the first region. The temporal position may be a past time corresponding to a previously recorded video feed from the video camera or the current time corresponding to the live video feed from the video camera. The current video feed indicator is movable relative to the equally spaced time indicators to facilitate a change in the temporal position of the video feed displayed in the first region.
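The mapping between the event timeline's temporal positions and on-screen indicator positions can be sketched as a clamped linear interpolation, with the inverse mapping modelling a drag of the current-video-feed indicator; the function names and pixel-based layout are assumptions for illustration:

```python
def timeline_position(t, t_start, t_now, width_px):
    """Map a timestamp to a horizontal pixel offset on an event timeline
    spanning [t_start, t_now] across width_px pixels, clamped to range."""
    t = max(t_start, min(t, t_now))
    return (t - t_start) / (t_now - t_start) * width_px

def position_to_time(x, t_start, t_now, width_px):
    """Inverse mapping: dragging the indicator to pixel x selects the
    corresponding temporal position of the displayed feed."""
    x = max(0, min(x, width_px))
    return t_start + x / width_px * (t_now - t_start)
```
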
The present invention discloses a networked communication system. The networked communication system comprises a multipoint control unit (MCU) operating an MCU server to receive and process multimedia data transmitted from a plurality of networked communication devices functioning as clients of the MCU. The multimedia data transmitted from each of the clients further include meta-data identifying each of the clients transmitting the multimedia data, and the MCU server further processes the multimedia data according to the meta-data to position the multimedia data in a plurality of frames for real-time multiple party display for each of the clients.
A method of facilitating a multi-participant communication session with an efficient use of network bandwidth and processing resources is described. The processing resources are made available to the session participants via a switching fabric and composites of less active session participants can be used to minimize the bandwidth utilization between a media server and endpoints.
A transmission terminal transmits video data and display data of a screen shared with another transmission terminal to the other transmission terminal via a predetermined relay apparatus. The transmission terminal includes a storage unit that stores relay apparatus information of the relay apparatus to which the transmission terminal transmits the video data; a receive unit that receives the display data from an external input apparatus connected to the transmission terminal; and a transmitting unit that transmits the display data received by the receive unit to the relay apparatus indicated by the relay apparatus information stored in the storage unit.
A method for providing control and visualization of media. A first media stream having an initial resolution is received from a source computer; the first media stream is rescaled to generate a second media stream with a second resolution; the second media stream is transmitted to a destination computer; after instructions indicating a selection of the second media stream and rescaling information are received from the destination computer, the first media stream is rescaled in accordance with the received rescaling information to generate a third media stream with a third resolution, which is transmitted to the destination computer.
A user device within a communication architecture, the user device comprising: an image capture device configured to determine image data and intrinsic/extrinsic capture device data for the creation of a video channel defining a shared scene; a surface reconstruction entity configured to determine surface reconstruction data associated with the image data from the image capture device; a video channel configured to encode and packetize the image data and intrinsic/extrinsic capture device data; a surface reconstruction channel configured to encode and packetize the surface reconstruction data; a transmitter configured to transmit the video and surface reconstruction channel packets; and a bandwidth controller configured to control the bandwidth allocated to the video channel and the surface reconstruction channel.
A perspective-correct communication window system and method for communicating between participants in an online meeting, where the participants are not in the same physical locations. Embodiments of the system and method provide an in-person communications experience by changing virtual viewpoint for the participants when they are viewing the online meeting. The participant sees a different perspective displayed on a monitor based on the location of the participant's eyes. Embodiments of the system and method include a capture and creation component that is used to capture visual data about each participant and create a realistic geometric proxy from the data. A scene geometry component is used to create a virtual scene geometry that mimics the arrangement of an in-person meeting. A virtual viewpoint component displays the changing virtual viewpoint to the viewer and can add perceived depth using motion parallax.
In order to solve the problems described above, the present invention employs a PSF restoring means and an image restoring means, implemented in software or hardware, for executing a plurality of iterations of real-number-based computations based on Bayes probability theory by using, as input information, a PSF luminance distribution identified according to a degree of degradation of TV video, a luminance distribution of a degraded image constituted of Y (luminance) components of the TV video, and an estimated luminance distribution of restored-image initial values. With these means, an estimated luminance distribution of a restored image having a maximum likelihood for the luminance distribution of the degraded image is obtained, and the estimated luminance distribution is substituted for the Y components of the TV video obtained by extracting the luminance distribution of the degraded image. Accordingly, TV video that approximates the pre-degradation state is provided substantially in real time.
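The abstract does not name the iteration, but the classic Bayesian maximum-likelihood restoration of this form (iterating a real-valued estimate toward the luminance distribution most likely to have produced the degraded image under a known PSF) is Richardson-Lucy deconvolution. A 1-D sketch, offered as an illustration rather than the patent's implementation:

```python
import numpy as np

def richardson_lucy(degraded, psf, n_iter=30):
    """Richardson-Lucy iteration: repeatedly re-estimate the restored
    luminance so that, blurred by the PSF, it best explains the degraded
    image (maximum likelihood under a Poisson/Bayesian model)."""
    estimate = np.full_like(degraded, degraded.mean(), dtype=float)
    psf_flip = psf[::-1]  # correlation with the PSF = convolution with its flip
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = degraded / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```
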
A solid-state image pickup device includes: a pixel portion in which a plurality of pixels are arrayed in matrix; and an Analog-Digital converter. The Analog-Digital converter converts pixel signals, which are generated in the pixel portion, from analog signals into digital signals. The Analog-Digital converter includes: a comparator; and a counter. The comparator compares the analog signals, each of which corresponds to each of the plurality of pixels, with a reference signal. The comparator includes: a first differential transistor to which the reference signal is input; a second differential transistor to which the analog signals are input; a first load transistor; a second load transistor; and a current source transistor. A fluctuation of a voltage between a gate and a source of the first differential transistor is suppressed. This fluctuation follows a voltage fluctuation of a node connected commonly to the second differential transistor and the second load transistor.
An imaging apparatus including a pixel, a current source, and a signal processing circuit. The pixel outputs signal charge, obtained by imaging, as a pixel signal. The current source is connected to a transmission path for the pixel signal and has a variable current. The signal processing circuit performs signal processing on a signal depending on an output signal to the transmission path and performs control so that a current of the current source is changed in accordance with the result of signal processing.
In a solid-state image capturing device, one or more vertical signal lines are disposed along one of the columns of a pixel portion, and each of the vertical signal lines is divided into two parts between an upper region and a lower region of the pixel portion. Pixel signals output from a plurality of pixels of the one of the columns are read out to a plurality of column readout circuits through two or more parts of the vertical signal lines, including the two parts of the one or more vertical signal lines disposed along the one of the columns. A division position of one vertical signal line among the vertical signal lines disposed in the pixel portion is different from a division position of another vertical signal line among the vertical signal lines in a row direction.
The present invention provides an image sensor comprising a pixel array module composed of pixel groups, multiple switch control modules, a PGA (programmable gain amplifier), a pipelined ADC, and a decode module. Each pixel group comprises multiple unit pixels. Each switch control module corresponds to a row of the pixel array module and includes a first transmission circuit and a second transmission circuit. The PGA processes the data output by the first and second transmission circuits, and the pipelined ADC performs A/D conversion on the data output by the PGA. The decode module controls the first and second transmission circuits of each row to alternately read and transmit the unit pixel data, and controls all the first and second transmission circuits to successively output the data of the unit pixels read out thereby to the PGA.
Readout circuitry to read out an array of image sensor pixels includes readout units that include a plurality of analog-to-digital converters ("ADCs"), a plurality of blocks of Static Random-Access Memory ("SRAM"), and a plurality of blocks of Dynamic Random-Access Memory ("DRAM"). The plurality of ADCs is coupled to read out analog image signals from two-dimensional blocks of the array of image sensor pixels. The plurality of blocks of SRAM is coupled to receive digital image signals from the ADCs. The digital image signals are representative of the analog image signals read out from the two-dimensional blocks of pixels. The plurality of blocks of DRAM is coupled to the blocks of SRAM. Each block of SRAM is coupled to sequentially output the digital image signals to each of the blocks of DRAM. Each of the readout units is coupled to output the digital image signals as a plurality of Input/Output ("IO") signals.
An image sensor may include an array of pixels arranged in rows and columns. Each pixel may include a floating diffusion node, a capacitor, a dual conversion gain (DCG) transistor having a gate terminal coupled to the floating diffusion, a source terminal, and a drain terminal coupled to the floating diffusion through the capacitor. Column readout circuitry may provide per-column control signals to the source terminal of the DCG transistor in the pixels of a selected row to place the pixels into a low conversion gain mode by turning the DCG transistor on and into a high conversion gain mode by turning the DCG transistor off. In this way, the readout circuitry may provide per-column conversion gains for each row without boosting DCG control signals to magnitudes greater than the pixel supply voltage, thereby reducing voltage stress on the pixel array and improving lifetime of the image sensor.
A solid-state imaging device, in which a plurality of overlapping substrates are included and the plurality of substrates are electrically connected to each other, includes a pixel circuit, a first readout circuit configured to read out signals from the photoelectric conversion units, a signal-processing circuit configured to perform signal processing on signals read out from the photoelectric conversion units, an output circuit configured to output signals processed by the signal-processing circuit to the outside, a first wiring configured to be provided to correspond to each of the four circuits and to supply a first voltage to each of the four circuits, a second wiring configured to be provided to correspond to each of the four circuits and to supply a second voltage different from the first voltage to each of the circuits, and a capacitor that is electrically connected with the first wiring and the second wiring.
As a control signal used for resetting a photodiode, a control signal for selecting an adjacent pixel row is used. Accordingly, the number of kinds of used control signals decreases, and a decrease in the area of the photodiode is prevented. In addition, a period in which all of a plurality of control signals selecting an adjacent pixel row are active is provided by setting an active period of a control signal selecting a pixel row to a period longer than a period in which a signal of one pixel row is read. Therefore, a CDS operation is realized.
An imaging device with low power consumption is provided. The imaging device includes pixels and an A/D converter circuit. The pixels have a function of holding first imaging data and a function of obtaining differential data between the first imaging data and second imaging data. The A/D converter circuit includes a comparator circuit and a counter circuit. When the output of the pixels corresponds to the differential data, the supply of a clock signal to the counter circuit is stopped.
An imaging device capable of obtaining high-quality imaging data is provided. The imaging device includes a photoelectric conversion element, a first transistor, a second transistor, a third transistor, a fourth transistor, a fifth transistor, a sixth transistor, and a first capacitor. Variation in the threshold voltage of amplifier transistors can be compensated. Furthermore, the imaging device can have a difference detecting function for holding differential data between imaging data for an initial frame and imaging data for a current frame and outputting a signal corresponding to the differential data.
An imaging apparatus includes an imaging unit including a plurality of pixels each including a first photoelectric conversion unit, a second photoelectric conversion unit, and a microlens collecting light to the first photoelectric conversion unit and the second photoelectric conversion unit, and a first mixing unit configured to mix output signals from the first photoelectric conversion units of the plurality of pixels to generate a first signal and configured to output the first signal and a second signal based on the output signal from the first photoelectric conversion unit and an output signal from the second photoelectric conversion unit from each of the plurality of pixels, an image processing unit configured to generate an image based on the second signal, and a focus detection processing unit configured to perform focus detection based on the first signal and the second signal.
An apparatus is described that includes a line buffer unit composed of a plurality of line buffer interface units. Each line buffer interface unit is to handle one or more requests by a respective producer to store a respective line group in a memory, and to handle one or more requests by a respective consumer to fetch and provide the respective line group from memory. The line buffer unit has programmable storage space whose information establishes line group size, so that different line group sizes for different image sizes are storable in memory.
A solid-state image sensor has a pixel array and a processor configured to process a signal from the pixel array, the pixel array including a light-receiving pixel having first and second photoelectric converters and a light-shielded pixel having third and fourth photoelectric converters. The processor outputs (a) a pixel signal corresponding to charges of the first photoelectric converter, (b) an added pixel signal corresponding to a sum of charges of the first photoelectric converter and charges of the second photoelectric converter, and (c) an added reference signal corresponding to a sum of charges of the third photoelectric converter and charges of the fourth photoelectric converter, and does not output (d) a reference signal corresponding to charges of the third photoelectric converter and a reference signal corresponding to charges of the fourth photoelectric converter.
A method and apparatus can determine lens shading correction for a multiple camera device with various fields of view. According to a possible embodiment, a flat-field image can be captured using a first camera having a first lens for a multiple camera device. A first lens shading correction ratio can be ascertained for the first camera. A second lens shading correction ratio can be determined for a second camera having a second lens for the multiple camera device. The second camera can have a different field of view from the first camera. The second lens shading correction ratio can be based on the first lens shading correction ratio and can be based on the flat-field image acquired by the first camera.
Various techniques are disclosed for providing a device attachment configured to releasably attach to and provide infrared imaging functionality to mobile phones or other portable electronic devices. The device attachment may include an infrared imaging module and a non-thermal imaging module that cooperate with one or more of a non-thermal imaging module in an attached device and a light source in the attached device for capturing and processing images.
Thu, 27 Oct 2016 08:00:00 EDTA system for controlling a pixel array sensor with independently controlled sub pixels is provided herein. The system includes at least one image detector, comprising an array of photo-sensitive pixels, each photo-sensitive pixel comprising at least one first type photo-sensitive sub pixel and a plurality of second type photo-sensitive sub pixels; and a processor configured to control the plurality of second type photo-sensitive sub pixels according to a specified exposure scheme, wherein the processor is further configured to control the at least one first type sub pixel independently of the specified exposure scheme, and wherein the processor is further configured to selectively combine data coming from the at least one first type sub pixel with data coming from at least one of the plurality of second type sub pixels.
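The selective combination of the two sub-pixel types might look like the sketch below. The weighting scheme is an assumption for illustration; the abstract does not specify how the data are merged.

```python
def combine(first_sub, second_subs, weight=0.5):
    """Blend the independently exposed first-type sub pixel with the mean
    of the scheme-controlled second-type sub pixels.

    first_sub:   value from the first type sub pixel (independent exposure)
    second_subs: values from the second type sub pixels (shared exposure scheme)
    weight:      hypothetical blend factor for the first-type contribution
    """
    avg_second = sum(second_subs) / len(second_subs)
    return weight * first_sub + (1 - weight) * avg_second
```

Because the first-type sub pixel is exposed independently, such a combination can, for example, recover highlight or shadow detail that the shared exposure scheme would clip.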
Thu, 27 Oct 2016 08:00:00 EDTA method is disclosed for processing at least a portion of an input digital image comprising rows of pixels extending in two mutually perpendicular directions over a 2D field. The method comprises defining a kernel for processing an image, the kernel comprising at least one row of contiguous elements of the same non-zero value (such rows being referred to herein as equal-valued kernel regions), the equal-valued kernel regions, if more than one, extending parallel to one another. For each pixel in at least selected parallel rows of pixels within the image portion, the cumulative sum of the pixel is calculated by adding the value of the pixel to the sum of all preceding pixel values in the same row of the image portion. The kernel is convolved with the image portion at successive kernel positions relative to the image portion such that each pixel in each selected row is a target pixel for a respective kernel position. For each kernel position, the convolving is performed, for each equal-valued kernel region, by calculating the difference between the cumulative sum of the pixel corresponding to the last element in the equal-valued kernel region and the cumulative sum of the pixel corresponding to the element immediately preceding the first element in the region, and summing the differences for all equal-valued kernel regions. The sum of the differences is scaled to provide a processed target pixel value.
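For a single equal-valued kernel region, the steps above reduce to a classic prefix-sum trick: the region's contribution at each position is the kernel value times the difference of two cumulative sums, so each output pixel costs two lookups regardless of kernel length. A minimal one-row sketch (names are illustrative):

```python
def row_cumsum(row):
    """Cumulative sums of one image row: out[i] = row[0] + ... + row[i]."""
    out, total = [], 0
    for p in row:
        total += p
        out.append(total)
    return out

def convolve_row(row, k, v):
    """Convolve one row with a length-k kernel whose elements all equal v,
    returning outputs only where the kernel fully overlaps the row."""
    cs = row_cumsum(row)
    out = []
    for last in range(k - 1, len(row)):
        first = last - k + 1
        # region sum = cumsum at last element minus cumsum just before first
        window = cs[last] - (cs[first - 1] if first > 0 else 0)
        out.append(v * window)  # scale by the shared kernel value
    return out
```

With multiple parallel equal-valued regions, the per-region differences are summed before scaling, as the method describes; the cost per target pixel stays constant rather than growing with kernel width.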
Thu, 27 Oct 2016 08:00:00 EDTA filter realization method and apparatus for a camera application may include: obtaining a user-defined filter use instruction; extracting a program script according to the user-defined filter use instruction, where the program script is generated according to a user-defined photo parameter; and performing, by using the extracted program script, filter rendering on a photo obtained by triggering photographing in a camera application, to obtain a photo containing a filter effect. The apparatus includes: a filter use instruction obtaining module, configured to obtain a user-defined filter use instruction; a script extraction module, configured to extract a program script according to the user-defined filter use instruction, where the program script is generated according to a user-defined photo parameter; and a filter rendering module, configured to perform, by using the extracted program script, filter rendering on a photo obtained by triggering photographing in a camera application, to obtain a photo containing a filter effect.
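The script-driven flow above can be sketched as follows. Here the "program script" is modeled as a list of operations generated from user-defined photo parameters; the operation names and pixel model are hypothetical simplifications.

```python
def generate_script(user_params):
    """Turn user-defined photo parameters into a program script
    (modeled here as an ordered list of (operation, value) pairs)."""
    return [(name, value) for name, value in user_params.items()]

def render_filter(pixels, script):
    """Apply each scripted operation to every pixel value (0-255 scale)."""
    ops = {
        "brightness": lambda p, v: min(255, p + v),
        "contrast": lambda p, v: min(255, max(0, int((p - 128) * v + 128))),
    }
    for name, value in script:
        pixels = [ops[name](p, value) for p in pixels]
    return pixels
```

The design separates the user's choice (the parameters, captured once in a script) from its application (rendering at photo time), so the same user-defined filter can be re-applied to every photo taken afterwards.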
Thu, 27 Oct 2016 08:00:00 EDTThe imaging system has two lenses coupled together, the lenses facing different sides of the device body. Each lens has a separate image sensor; for example, the first lens is for the front side camera and the second lens is for the back side camera. A single actuator moves both lenses simultaneously. Depending on the imaging direction, either the front side image sensor or the back side image sensor is activated and an image is received through the corresponding lens system. The actuator provides autofocus and image stabilization functions for both cameras.
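The control logic implied above, one shared actuator and direction-dependent sensor activation, can be sketched as below. The class and method names are hypothetical; real autofocus and stabilization involve closed-loop actuator control.

```python
class DualCamera:
    """Two coupled lenses driven by one actuator, with two sensors."""

    def __init__(self):
        self.lens_position = 0.0  # shared actuator state, moves both lenses

    def autofocus(self, position):
        """Moving the single actuator adjusts focus for BOTH lenses at once."""
        self.lens_position = position

    def capture(self, front_facing, front_sensor, back_sensor):
        """Activate only the sensor matching the imaging direction."""
        return front_sensor() if front_facing else back_sensor()
```

Sharing one actuator between the coupled lenses is the cost-saving choice here: only the sensor is switched per direction, while focus and stabilization hardware is common to both cameras.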