Tue, 08 Sep 2015 08:00:00 EDT
Systems and methods configured to store images synthesized from light field image data and metadata describing the images in electronic files and render images using the stored image and the metadata in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to: synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image and metadata including the depth map.

Tue, 28 Jul 2015 08:00:00 EDT
Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.

Tue, 21 Jul 2015 08:00:00 EDT
Dual-energy absorptiometry is used to estimate visceral fat metrics and display results, preferably as related to normative data. The process involves deriving x-ray measurements for respective pixel positions related to a two-dimensional projection image of a body slice containing visceral fat as well as subcutaneous fat, at least some of the measurements being dual-energy x-ray measurements, processing the measurements to derive estimates of metrics related to the visceral fat in the slice, and using the resulting estimates. Processing the measurements includes an algorithm which places boundaries of regions, e.g., a large “abdominal” region and a smaller “abdominal cavity” region. Two boundaries of the “abdominal cavity” region are placed at positions associated with the left and right innermost extent of the abdominal muscle wall by identifying inflection of % Fat values. The regions are combined in an equation that is highly correlated with VAT measured by quantitative computed tomography in order to estimate VAT.

Tue, 14 Jul 2015 08:00:00 EDT
A method and a system for determining patient motion in an image. The method includes obtaining an image based on image data generated by a scanner during a scan. The image includes at least three markers assumed to be in a rigid or semi-rigid configuration. Each of the at least three markers has actual measured positions on a detector panel of the scanner in a first dimension and a second dimension. The method further includes determining a reference three-dimensional position for each of the at least three markers and defining equations describing the relationship between the reference three-dimensional position and the actual measured positions of each of the at least three markers, geometric parameters of the scanner, and patient motion. The method finally includes solving numerically the equations to derive a six-component motion vector describing patient motion for the image that accounts for differences between the reference three-dimensional position of each of the at least three markers and the actual measured positions for each of the at least three markers.
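The final numerical solving step of the abstract above can be sketched in reduced form: the sketch below assumes a simple pinhole-style projection onto a flat detector and estimates only a 3-component translation rather than the full six-component motion vector; `project`, `estimate_motion`, and the `sdd` distance are illustrative assumptions, not the patented geometry.

```python
import numpy as np

def project(points, sdd=1000.0):
    """Pinhole-style projection of 3-D marker points onto a flat detector,
    giving (u, v) positions; sdd is an assumed source-to-detector distance."""
    z = sdd - points[:, 2]
    return np.column_stack((sdd * points[:, 0] / z, sdd * points[:, 1] / z))

def estimate_motion(ref_points, measured_uv, iters=20):
    """Gauss-Newton solve for a 3-component translation (a reduced stand-in
    for the six-component motion vector) that best explains the measured
    detector positions of the markers."""
    t = np.zeros(3)
    eps = 1e-5
    for _ in range(iters):
        # Residuals between predicted and actually measured (u, v) positions.
        r = (project(ref_points + t) - measured_uv).ravel()
        # Numerical Jacobian of the residuals with respect to the translation.
        J = np.zeros((r.size, 3))
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = ((project(ref_points + t + dt) - measured_uv).ravel() - r) / eps
        # Least-squares update step.
        t -= np.linalg.lstsq(J, r, rcond=None)[0]
    return t
```

With at least three markers the residual system is overdetermined (two detector equations per marker), which is what makes the least-squares step well posed.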
Tue, 14 Jul 2015 08:00:00 EDT
A method and system of rapidly detecting and mapping a plurality of marker points identified in a sequence of projection images to a physical marker placed on a patient or other scanned object. Embodiments of the invention allow a marker physical mapping module executable by an electronic processing unit to obtain a sequence of images based on image data generated by a scanner. Each image in the sequence of images represents an angle of rotation by the scanner and includes a plurality of marker points each having a U position and a V position. Expected U and V positions for the physical marker are calculated and a list of candidate marker points is created based on the expected positions. The images are processed and selected candidate marker points are added to a good point list.

Tue, 14 Jul 2015 08:00:00 EDT
A method and system of determining a sub-pixel point position of a marker point in an image. Embodiments of the invention allow a marker point sub-pixel localization module executable by an electronic processing unit to obtain an image, including a marker point, and to derive both a background corrected marker point profile and a background and baseline corrected marker point profile. Additionally, embodiments of the invention allow defining of a sub-pixel position of the marker point as a center of a peak of the background corrected marker point profile. Thus, embodiments of the invention allow for rapidly detecting and localizing external markers placed on a patient in projection images.

Tue, 14 Jul 2015 08:00:00 EDT
Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.

Tue, 07 Jul 2015 08:00:00 EDT
Approaches are described for addressing artifacts associated with iterative reconstruction of image data acquired using a cone-beam CT system. Such approaches include, but are not limited to, the use of asymmetric regularization during iterative reconstruction, the modulation of regularization strength for certain voxels, the modification of statistical weights, and/or the generation and use of synthesized data.
Tue, 07 Jul 2015 08:00:00 EDT
Aspects of the disclosure relate generally to determining the specularity of an object. As an example, an object or area of geometry may be selected. A set of images that include the area of geometry may be captured. This set of images may be filtered to remove images that do not show the area of geometry well, such as when the area is in a shadow or occluded by another object. An average intensity value for the area is determined for each image. A set of angle values for each image is determined based on at least a direction of the camera that captured the particular image when the particular image was captured. The average intensities and the angle values are paired and fit to a curve. The specularity of the area may then be classified based on at least the fit.
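The pairing-and-fitting step above can be illustrated with a minimal sketch. It assumes, purely for illustration, a quadratic fit whose peak curvature separates specular from diffuse patches; the function name and the threshold value are arbitrary choices, not the patented classifier.

```python
import numpy as np

def classify_specularity(angles, intensities, curvature_threshold=0.5):
    """Fit paired (angle, intensity) samples to a quadratic and classify
    the surface patch: a sharply peaked fit suggests a specular highlight,
    a flat fit suggests a diffuse (Lambertian-like) surface."""
    coeffs = np.polyfit(angles, intensities, deg=2)
    curvature = -coeffs[0]  # positive when the fitted parabola peaks
    return "specular" if curvature > curvature_threshold else "diffuse"
```

A mirror-like patch yields intensities concentrated around one viewing angle (strong curvature), while a matte patch looks equally bright from every angle (near-zero curvature).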
Tue, 30 Jun 2015 08:00:00 EDT
A measurement system is capable of accurately aligning the corresponding measurement points of a plurality of measurement targets in order to evaluate the targets from measurement results. The measurement system includes a measuring instrument and a PC; the measuring instrument includes a spectroscopic unit that measures a measurement point of a measurement target and a camera that images the surroundings in real time. The PC displays an evaluation image of continuous image information, imaged and displayed by the camera on a display screen, superimposed on a reference image of still image information that has previously been imaged and stored in memory. By comparing the data obtained by measuring the measurement point in the evaluation image when the two images overlap each other with the measurement data of the same point in the reference image, it is possible to perform positioning easily and compare the measurement data.
Tue, 16 Jun 2015 08:00:00 EDT
Systems and methods are disclosed for integrating imaging data from multiple sources to create a single, accurate model of a patient's anatomy. One method includes receiving a representation of a target object for modeling; determining one or more first anatomical parameters of the target anatomical object from at least one of one or more first images of the target anatomical object; determining one or more second anatomical parameters of the target anatomical object from at least one of one or more second images of the target anatomical object; updating the one or more first anatomical parameters based at least on the one or more second anatomical parameters; and generating a model of the target anatomical object based on the updated first anatomical parameters.
Tue, 09 Jun 2015 08:00:00 EDT
A dynamically reconfigurable heterogeneous systolic array is configured to process a first image frame, to generate image processing primitives from the image frame, and to store the primitives and the corresponding image frame in a memory store. A characteristic of the image frame is determined. Based on the characteristic, the array is reconfigured to process a following image frame.
Tue, 09 Jun 2015 08:00:00 EDT
An image processing apparatus capable of performing time-sequential weighted addition processing on a moving image constituted by a plurality of frames includes: a filter unit configured to obtain an image by performing recursive filter processing on a frame image that precedes the present target frame, which is one of the plurality of frames constituting the moving image; a coefficient acquisition unit configured to acquire a weighting coefficient for the present target frame based on a noise statistic of the differential image between the image of the present target frame and the image obtained by the filter unit; and an addition unit configured to perform addition processing on the moving image based on the weighting coefficients acquired by the coefficient acquisition unit.
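The recursive-filter-plus-noise-statistic idea above can be sketched as follows. This is a minimal stand-in: it uses the standard deviation of the frame difference as the "noise statistic" and a simple clipped weighting rule; the constant `k` and the clip bounds are assumptions, not the patented coefficient acquisition.

```python
import numpy as np

def temporal_filter(frames, k=2.0):
    """Recursive temporal filtering: each new frame is blended with the
    running filtered image, with a weight derived from the standard
    deviation (an assumed noise statistic) of their difference."""
    filtered = frames[0].astype(float)
    for frame in frames[1:]:
        diff = frame - filtered
        noise = diff.std() + 1e-12
        # Large frame-to-frame differences (motion) -> trust the new frame
        # more; small differences (static scene) -> average more strongly.
        w = float(np.clip(noise / (noise + k), 0.1, 0.9))
        filtered = w * frame + (1.0 - w) * filtered
    return filtered
```

On a static noisy sequence the recursion behaves like a running average, so the residual noise of the output falls well below that of any single frame.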
Tue, 02 Jun 2015 08:00:00 EDT
A method is disclosed for reconstructing image data of an examination object from measurement data, wherein the measurement data were acquired in the course of a relative rotational movement between a radiation source of a computed tomography system and the examination object. First image data of the examination object are reconstructed from the measurement data. Scatter signals are calculated from the first image data using a scattered radiation model, wherein the scattered radiation model specifies an angle-dependent scatter distribution for a scatter point as a function of a line integral corresponding to an attenuation integral of a scattered beam from the scatter point to a specific detector element. The calculated scatter signals are used for correcting the measurement data, and second image data are reconstructed using the corrected measurement data.

Tue, 26 May 2015 08:00:00 EDT
A method and device for controlling content that includes plural display pages in a sequence, the method including: displaying a current page included in the content; receiving a user input to or above a display screen of the display unit for changing from the current page to another page of the content; extracting fingerprint information from the user input; determining whether the content of the another page is or is not accessible based on the extracted fingerprint information; if all of the content of the another page is determined to be accessible based on the extracted fingerprint information, displaying the another page; and if any of the content of the another page is determined not to be accessible based on the extracted fingerprint information, displaying a page following the current page without displaying content of the another page that was determined not to be accessible.

Tue, 26 May 2015 08:00:00 EDT
A screen data transfer device that includes a processor that executes a procedure. The procedure includes: (a) receiving screen data having a changed portion compressed by first compression processing, and detecting a high region where a changed frequency in the screen is a threshold value or greater; (b) detecting a processing load of the device itself; and (c) when the detected load is a threshold value or greater, and in cases in which the changed portion overlaps with the high region, assigning a route of the screen data to a first route in which second compression processing with a higher compression ratio than the first compression processing is executed by a compression section, and in cases in which the changed portion does not overlap with the high region, assigning the route to a second route in which the screen data bypasses the compression section.
Tue, 26 May 2015 08:00:00 EDT
Various embodiments enable a device to perform tasks such as processing an image to recognize and locate text in the image, and providing the recognized text to an application executing on the device for performing a function (e.g., calling a number, opening an internet browser, etc.) associated with the recognized text. In at least one embodiment, processing the image includes substantially simultaneously or concurrently processing the image with at least two recognition engines, such as at least two optical character recognition (OCR) engines, running in a multithreaded mode. In at least one embodiment, the recognition engines can be tuned so that their respective processing speeds are roughly the same. Utilizing multiple recognition engines enables processing latency to be close to that of using only one recognition engine.
Tue, 26 May 2015 08:00:00 EDT
An image of an item is obtained in one or more computing devices. The item is identified from the image based at least in part on data derived from reference images that are each associated with one or more items. Catalog data corresponding to the item is associated with the image. The catalog data is added to a merchant catalog in an electronic marketplace.

Tue, 26 May 2015 08:00:00 EDT
The present invention is based on the finding that parameters including: a first set of parameters of a representation of a first portion of an original signal and a second set of parameters of a representation of a second portion of the original signal can be efficiently encoded when the parameters are arranged in a first sequence of tuples and a second sequence of tuples. The first sequence of tuples includes tuples of parameters having two parameters from a single portion of the original signal and the second sequence of tuples includes tuples of parameters having one parameter from the first portion and one parameter from the second portion of the original signal. A bit estimator estimates the number of necessary bits to encode the first and the second sequence of tuples. Only the sequence of tuples, which results in the lower number of bits, is encoded.
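The choose-the-cheaper-arrangement idea above can be sketched with a toy bit estimator. A zeroth-order entropy estimate stands in for the patent's actual bit estimator, and the pairing scheme is a simplifying assumption for illustration.

```python
import math
from collections import Counter

def estimated_bits(tuples):
    """Zeroth-order entropy estimate (an assumed stand-in for the
    patent's bit estimator) of the bits needed to code a tuple sequence."""
    counts = Counter(tuples)
    total = sum(counts.values())
    return sum(-c * math.log2(c / total) for c in counts.values())

def choose_arrangement(part_a, part_b):
    """Arrange two parameter sets as intra-portion tuples (both values from
    one portion) and as cross-portion tuples (one value from each portion),
    then keep whichever arrangement the estimator prices cheaper."""
    intra = list(zip(part_a[0::2], part_a[1::2])) + list(zip(part_b[0::2], part_b[1::2]))
    cross = list(zip(part_a, part_b))
    if estimated_bits(intra) <= estimated_bits(cross):
        return "intra", intra
    return "cross", cross
```

When the two portions are strongly correlated (e.g., stereo channels carrying the same parameter), cross-portion tuples repeat and compress better; when each portion is internally repetitive, intra-portion tuples win.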
Tue, 26 May 2015 08:00:00 EDT
An automated process uses a local positioning system to acquire location (i.e., position and orientation) data for one or more movable target objects. In cases where the target objects have the capability to move under computer control, this automated process can use the measured location data to control the position and orientation of such target objects. The system leverages the measurement and image capture capability of the local positioning system, and integrates controllable marker lights, image processing, and coordinate transformation computation to provide tracking information for vehicle location control. The resulting system enables position and orientation tracking of objects in a reference coordinate system.
Tue, 26 May 2015 08:00:00 EDT
Device (10) for vehicle driving evaluation comprising: a means (11) to obtain at least one physical parameter whereby it is possible at any time to determine the value of the speed and instantaneous acceleration of a traveling vehicle; a calculation and comparison unit (12) whereby it is possible, from said physical parameter, to calculate an effective parameter that depends on said instantaneous acceleration and to compare said effective parameter with a reference parameter; and a driving evaluation unit (13) whereby it is possible to generate a vehicle driving energy score by measuring the variance between said effective parameter and said reference parameter. A corresponding vehicle driving evaluation process is also described.
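The scoring idea above can be sketched numerically: derive instantaneous acceleration from speed samples, compare it with a reference value, and score by the variance of the deviation. The reference value, the 0-100 scaling, and all names below are illustrative assumptions.

```python
def driving_energy_score(speeds, dt=1.0, ref_accel=1.5):
    """Toy driving-evaluation score: acceleration is derived from
    successive speed samples; the score falls as the variance of the
    deviation from a reference acceleration grows."""
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    deviations = [abs(a) - ref_accel for a in accels]
    mean = sum(deviations) / len(deviations)
    variance = sum((d - mean) ** 2 for d in deviations) / len(deviations)
    # Map variance to a 0-100 score: smoother driving -> higher score.
    return 100.0 / (1.0 + variance)
```

A constant-speed trace has zero deviation variance and scores 100; a jerky speed trace scores lower.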
Tue, 26 May 2015 08:00:00 EDT
One or more derived versions of image content may be obtained by interpolating two or more source versions of the same image content. A derived version may be targeted for a class of displays that differs from classes of displays targeted by the source versions. Source images in a source version may have been color graded in a creative process by a content creator/colorist. Interpolation of the source versions may be performed with interpolation parameters having two or more different values in two or more different clusters in at least one of the source images. A normalized version may be used to allow efficient distribution of multiple versions of the same content to a variety of downstream media processing devices, and to preserve or restore image details otherwise lost in one or more of the source versions.

Tue, 26 May 2015 08:00:00 EDT
Embodiments of the present disclosure provide a method that comprises receiving a motion-interpolated pixel of an interpolated video frame, wherein the motion-interpolated pixel is based at least in part on a pair of anchor video frames. The method further comprises blending the motion-interpolated pixel with one or more anchor pixels of the pair of anchor video frames to produce a temporally filtered pixel, wherein the one or more anchor pixels correspond in position to the motion-interpolated pixel of the interpolated video frame. The method also comprises substituting the temporally filtered pixel for the motion-interpolated pixel in the interpolated video frame.
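The blending step above reduces, for whole frames at once, to a weighted mix of the interpolated pixels with the co-located anchor pixels. The `alpha` weight and the equal averaging of the two anchors are assumptions made for this sketch.

```python
import numpy as np

def temporally_filter(interp, anchor_prev, anchor_next, alpha=0.5):
    """Blend each motion-interpolated pixel with the co-located pixels of
    the two anchor frames; alpha controls how much of the anchor average
    is mixed in (the value here is illustrative)."""
    anchor_avg = 0.5 * (anchor_prev + anchor_next)
    return (1.0 - alpha) * interp + alpha * anchor_avg
```

The filtered result then replaces the motion-interpolated pixels in the interpolated frame, damping halo artifacts where the motion estimate disagrees with both anchors.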
Tue, 26 May 2015 08:00:00 EDT
A direction of regularity, which minimizes a directional energy computed from pixel values of consecutive first and second frames of an input video sequence, is respectively associated with each pixel of the first frame and with each pixel of the second frame. Another direction of regularity (vz), which minimizes a directional energy computed from pixel values of the first and second frames, is also associated with an output pixel (z) of a frame of an output video sequence, located in time between the first and second frames. For processing such output pixel, the respective minimized directional energies for the output pixel, at least one pixel (z′) of the first frame and at least one pixel (z″) of the second frame are compared to control an interpolation performed to determine a value of the output pixel. The interpolation uses pixel values from at least one of the first and second frames of the input video sequence depending on the comparison of the minimized directional energies.

Tue, 26 May 2015 08:00:00 EDT
Systems, methods, and computer readable media to register images in real-time and that are capable of producing reliable registrations even when the number of high frequency image features is small. The disclosed techniques may also provide a quantitative measure of a registration's quality. The latter may be used to inform the user and/or to automatically determine when visual registration techniques may be less accurate than motion sensor-based approaches. When such a case is detected, an image capture device may be automatically switched from visual-based to sensor-based registration. Disclosed techniques quickly determine indicators of an image's overall composition (row and column projections) which may be used to determine the translation of a first image, relative to a second image. The translation so determined may be used to align/register the two images.
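The row/column-projection registration described above can be sketched as follows. The shift-search strategy (brute-force sum of squared differences over a small window) and the function names are assumptions of this sketch, not the patented algorithm.

```python
import numpy as np

def estimate_translation(img_a, img_b, max_shift=8):
    """Estimate integer (dy, dx) translation of img_b relative to img_a by
    matching row and column projections (sums along each image axis)."""
    rows_a, rows_b = img_a.sum(axis=1), img_b.sum(axis=1)
    cols_a, cols_b = img_a.sum(axis=0), img_b.sum(axis=0)

    def best_shift(p, q):
        # Try each candidate shift; keep the one minimizing the mean
        # squared difference over the overlapping part of the profiles.
        best, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                err = np.mean((p[:len(p) - s] - q[s:]) ** 2)
            else:
                err = np.mean((p[-s:] - q[:s]) ** 2)
            if err < best_err:
                best, best_err = s, err
        return best

    return best_shift(rows_a, rows_b), best_shift(cols_a, cols_b)
```

Because only two 1-D profiles per image are compared, the cost is far below full 2-D correlation, which is what makes the approach attractive for real-time use.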
Tue, 26 May 2015 08:00:00 EDT
An input image (IMG1) may be converted to a lower-resolution output image (IMG2) by: determining a location (OP) of an output pixel (P2) with respect to said input image (IMG1), determining values of elements (E) of a filter array (FA1) such that non-zero values of the elements (E) of said filter array (FA1) approximate a paraboloid reference surface (REFS), wherein said reference surface (REFS) has a maximum at a base point (BP), and determining a value of said output pixel (P2) by performing a sum-of-products operation between non-zero values of said elements (E) and values of input pixels (P1) of said input image (IMG1) located at respective positions, wherein said filter array (FA1) is superimposed on said input image (IMG1) such that the location of said base point (BP) corresponds to the location of said output pixel (P2).
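A minimal sketch of the paraboloid filter-array downsampling above, assuming integer-aligned output-pixel locations and a kernel radius tied to the scale factor (both simplifications; the claim allows fractional base-point locations):

```python
import numpy as np

def paraboloid_kernel(radius):
    """Filter array whose non-zero elements approximate a paraboloid
    surface with its maximum at the base point (the kernel centre)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.maximum(0.0, 1.0 - (x**2 + y**2) / float(radius**2))
    return k / k.sum()  # normalize so flat regions keep their value

def downsample(img, factor):
    """Sum-of-products between the kernel and the input pixels around
    each output-pixel location."""
    r = factor  # kernel radius tied to the scale factor (an assumption)
    k = paraboloid_kernel(r)
    pad = np.pad(img, r, mode='edge')
    out = np.empty((img.shape[0] // factor, img.shape[1] // factor))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Base point of the superimposed filter array in padded coords.
            ci, cj = i * factor + r, j * factor + r
            out[i, j] = np.sum(k * pad[ci - r:ci + r + 1, cj - r:cj + r + 1])
    return out
```

The paraboloid weighting concentrates weight at the base point and tapers smoothly to zero, giving low-pass behaviour without the ringing of a truncated sinc.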
Tue, 26 May 2015 08:00:00 EDT
A method and apparatus is provided for collecting data and generating synthesized data from the collected data. For example, a request for an image may be received from a requestor and at least one data capture device may be identified as capable of providing at least a portion of the requested image. A request may be sent to identified data capture devices to obtain an image corresponding to the requested image. Multiple images may be received from the data capture devices and may further be connected or stitched together to provide a panoramic, 3-dimensional image of requested subject matter.

Tue, 26 May 2015 08:00:00 EDT
There is described a method and a device for forming a panoramic image wherein it is decided to add a current image in a current panoramic image based on definitions of edges of the current image and the current panoramic image.
Tue, 26 May 2015 08:00:00 EDT
An image processing apparatus includes an image acquiring unit that acquires an image; an information acquiring unit that acquires image information indicative of the content of the image; and a correcting unit that corrects the image based on the image information such that some of the warping of the image is left. The horizontal direction component of the warping may be completely or nearly completely eliminated while a predetermined portion of a vertical direction component of the warping is left, when the image information indicates the content of the image is a person. Alternatively, there is an analyzer which generates image information indicating that the content of the image is a person when the analysis determines that the image contains two or more persons.
Tue, 26 May 2015 08:00:00 EDT
Automatic generation of a mosaic comprising a plurality of geospatial images. An embodiment of the automatic mosaic generation may include automated source image selection that includes comparison of source images to a base layer image to determine radiometrically similar source images. Additionally, an embodiment of an automatic cutline generator may be provided to automatically determine a cutline when merging two images such that radiometric differences between the images along the cutline are reduced. In this regard, less perceivable cutlines may be provided. Further still, an embodiment of a radiometric normalization module may be provided that may determine radiometric adjustments to source images to match certain properties of the base layer image. In some embodiments, when processing source images, the source images may be downsampled during a portion of the processing to reduce computational overhead. Additionally, some highly parallel computations may be performed by a GPU to further enhance performance.

Tue, 26 May 2015 08:00:00 EDT
Provided are a method and apparatus for deblurring non-uniform motion blur in an input image. A clearer image may be restored by dividing a large-scale input image into tiles corresponding to partial areas, selecting, among the divided tiles, an optimal tile for the partial area most suitable for estimating non-uniform motion blur information, and effectively removing artifacts in the outer portion of each tile through padding of the tiles.

Tue, 26 May 2015 08:00:00 EDT
Various embodiments of methods and apparatus for motion deblurring are disclosed. In one embodiment, an estimate of the latent image of a blurred image at the current scale is generated from the estimate of the latent image at the previous, coarser scale using an upsampling super-resolution function, and a blur kernel is estimated based on the estimate of the latent image and the blurred image; these two steps are repeated from coarse to fine scale. A final image estimate is then generated by performing a deconvolution of the latent image using the blur kernel and the blurred image.
Tue, 26 May 2015 08:00:00 EDT
A method, system, and computer software product for improving rate-distortion performance while remaining faithful to JPEG/MPEG syntax, involving joint optimization of Huffman tables, quantization step sizes, and quantized coefficients of a JPEG/MPEG encoder. This involves finding the optimal coefficient indices in the form of (run, size) pairs. By employing an iterative process including this search for optimal coefficient indices, joint improvement of run-length coding, Huffman coding, and quantization table selection may be achieved. Additionally, the compression of quantized DC coefficients may also be improved using a trellis structure.
Tue, 26 May 2015 08:00:00 EDT
There is provided a system, a computer program product, program storage device readable by machine, and a method of downsizing an input disjoint block level encoded image. According to examples of the presently disclosed subject matter, the method can include calculating a DCT downsize ratio for downsizing the input image in a DCT domain according to a target downsize ratio and according to a size of a DCT transform length associated with the input image; adapting an I-DCT according to the DCT domain downsize ratios; performing the adapted I-DCT; providing an intermediate image as output of a DCT domain process; and applying a pixel domain interpolation to the intermediate image according to dimensions of the intermediate image and according to dimensions of the target image.
Tue, 26 May 2015 08:00:00 EDT
A method and apparatus for prioritizing video information during coding and decoding. Video information is received and an element of the video information, such as a visual object, video object layer, video object plane, or key region, is identified. A priority is assigned to the identified element and the video information is encoded into a bitstream, such as a visual bitstream encoded using the MPEG-4 standard, including an indication of the priority of the element. The priority information can then be used when decoding the bitstream to reconstruct the video information.
Tue, 26 May 2015 08:00:00 EDT
An image processing apparatus includes the following elements. A document-type determining unit determines what type of document a document is on the basis of read information obtained as a result of reading the document by using a document reader. A compression-format setting unit sets, on the basis of the type of document determined by the document-type determining unit, a compression format used for generating image data from the read information. A generator compresses the read information by using the compression format set by the compression-format setting unit so as to generate image data corresponding to the document.

Tue, 26 May 2015 08:00:00 EDT
Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map to create a rendered image.

Tue, 26 May 2015 08:00:00 EDT
An apparatus and method for encoding and decoding an image are provided. The image decoding method includes decoding luma blocks according to a predetermined decoding mode of each of the luma blocks, and decoding chroma blocks according to the predetermined decoding mode of each of the luma blocks.
Tue, 26 May 2015 08:00:00 EDT
An image display apparatus that can shorten the time for observing a series of images of the interior of a subject without hampering observation of a desired region of interest is provided. The image display apparatus according to the present invention displays the series of images of the interior of the subject picked up in time series, and includes an image extracting unit 15a that extracts images each having a feature of a desired region in the interior of the subject and identifies the extracted images as images of the desired region among the series of images, and a frame rate controller 15b that sets a display frame rate for the identified images of the desired region different from the display frame rate for images of regions other than the desired region.
Tue, 26 May 2015 08:00:00 EDT
Embodiments of the present disclosure can include devices for storing and exchanging color space encoded images. The encoded images can store input data in high-capacity multi-colored composite two-dimensional pictures having different symbols organized in a specific order using sets in a color space. The encoding can include performing two-level error correction and generating frames based on the color space for formatting and calibrating the encoded images during decoding. The decoding can use the frames to perform color restoration and distortion correction. The decoding can be based on a pseudo-Euclidean distance between a distorted color and a color in the color calibration cells. In some embodiments, an encoded image can be further divided into sub-images during encoding for simplified distortion correction.
Tue, 26 May 2015 08:00:00 EDT
The invention pertains to a method for segmenting an image from a sequence of video images into a foreground and a background, the image being composed of pixels. The method includes assigning an initial foreground probability to each of the pixels; assigning a set of probability propagation coefficients to each of the pixels; and applying a global optimizer to the initial foreground probabilities of the pixels to classify each pixel as a foreground pixel or a background pixel, obtaining a definitive foreground map. The global optimizer classifies each processed pixel as a function of the initial foreground probability of the processed pixel and the initial foreground probabilities of neighboring pixels, with the relative weight of the neighboring pixels' probabilities determined by their probability propagation coefficients.
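A Jacobi-style relaxation can serve as a toy stand-in for the global optimizer above: each pixel's probability is repeatedly pulled toward a coefficient-weighted average of its 4-neighbours while staying anchored to its initial value. The 50/50 data-versus-smoothness split and the iteration count are assumptions of this sketch.

```python
import numpy as np

def segment(prob, coeff, iters=50, threshold=0.5):
    """Relax foreground probabilities using the neighbours' propagation
    coefficients as weights, then threshold into a foreground map."""
    p = prob.copy()
    for _ in range(iters):
        # Coefficient-weighted average of the four neighbours (with wrap,
        # acceptable for this toy sketch).
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            num += np.roll(coeff * p, shift, axis=axis)
            den += np.roll(coeff, shift, axis=axis)
        neighbour_avg = num / np.maximum(den, 1e-12)
        # Anchor to the initial probability (data term) while smoothing.
        p = 0.5 * prob + 0.5 * neighbour_avg
    return p >= threshold
```

Low propagation coefficients (e.g., at strong image edges) stop foreground probability from leaking across object boundaries, which is the role the coefficients play in the claim.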
Tue, 26 May 2015 08:00:00 EDT One or more systems and/or techniques are provided to identify objects comprised in a compound object without segmenting three-dimensional image data of the potential compound object. Two-dimensional projections of a potential compound object (e.g., Eigen projections) are examined to identify the presence of known objects. The projections are compared to signatures, such as morphological characteristics, of one or more known objects. If it is determined based upon the comparison that there is a high likelihood that the compound object comprises a known object, a portion of the projection is masked, and it is compared again to the signature to determine if this likelihood has increased. If it has, a sub-object of the compound object may be classified based upon characteristics of the known object (e.g., the compound object may be classified as a potential threat item if the known object is a threat item).
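The mask-and-recompare step can be sketched as follows. The `match` function and the use of cosine similarity are illustrative assumptions; the patent's signatures are morphological characteristics, not a fixed similarity measure.

```python
import numpy as np

def masked_recheck(projection, signature, mask_box, match):
    # match(a, b) -> similarity score; signature is a template of a known object.
    # Compare the whole 2-D projection, then mask a portion of it and compare
    # again; a rising score suggests the compound object contains the known
    # object outside the masked portion.
    first = match(projection, signature)
    masked = projection.copy()
    y0, y1, x0, x1 = mask_box
    masked[y0:y1, x0:x1] = 0.0   # mask out the candidate clutter region
    second = match(masked, signature)
    return second > first        # likelihood increased after masking
```

When the function returns true, the remaining (unmasked) sub-object would be classified using the known object's characteristics, without ever segmenting the 3-D data.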
Tue, 26 May 2015 08:00:00 EDT According to one embodiment, an information processing apparatus includes an acquirement unit and a reporting unit. The acquirement unit is configured to acquire an image captured by an image capturing section. When a similarity, representing the degree to which the image of an object captured by the image capturing section resembles the reference image of a commodity, meets a condition for determining the captured commodity as the single commodity corresponding to that reference image, the reporting unit is configured to report that the captured commodity has been determined as the commodity meeting the condition and corresponding to the reference image.
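A minimal sketch of the reporting logic, assuming histogram intersection as the similarity measure and a fixed threshold as the determination condition (both are assumptions; the patent does not specify them):

```python
import numpy as np

def report_commodity(captured_hist, reference_hists, determine_threshold=0.9):
    # Histogram intersection stands in for the similarity measure; the
    # threshold models the condition for determining a single commodity.
    best_name, best_sim = None, 0.0
    for name, ref in reference_hists.items():
        sim = float(np.minimum(captured_hist, ref).sum() / ref.sum())
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim >= determine_threshold:
        # reporting unit: announce the determined commodity
        return "determined: %s (similarity %.2f)" % (best_name, best_sim)
    return None  # condition not met: nothing to report
```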
Tue, 26 May 2015 08:00:00 EDT Identification of objects in images. All images are scanned for key-points and a descriptor is computed for each region. A large number of descriptor examples are clustered into a Vocabulary of Visual Words. An inverted file structure is extended to support clustering of matches in the pose space. It has a hit list for every visual word, which stores all occurrences of the word in all reference images. Every hit stores an identifier of the reference image where the key-point was detected, together with its scale and orientation. Recognition starts by assigning key-points from the query image to the closest visual words. Then, every pairing of a key-point and one of the hits from the list casts a vote into a pose accumulator corresponding to the reference image where the hit was found. Every key-point/hit pair predicts a specific orientation and scale of the model represented by the reference image.
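The voting scheme can be sketched as below: each query key-point is assigned to its nearest visual word, and each hit on that word's list votes into an accumulator binned by reference image, relative orientation, and relative scale. The bin granularity and data layout are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

def recognize(query_keypoints, vocabulary, hit_lists, angle_bins=8):
    # query_keypoints: list of (descriptor, scale, orientation)
    # vocabulary:      (V, D) array of visual-word centroids
    # hit_lists:       {word_id: [(image_id, scale, orientation), ...]}
    votes = defaultdict(int)
    for desc, q_scale, q_ori in query_keypoints:
        # assign the key-point to the closest visual word
        word = int(np.argmin(np.linalg.norm(vocabulary - desc, axis=1)))
        for image_id, h_scale, h_ori in hit_lists.get(word, []):
            # each key-point/hit pair predicts model orientation and scale
            d_ori = (q_ori - h_ori) % (2 * np.pi)
            d_scale = np.log2(q_scale / h_scale)
            bin_key = (image_id,
                       int(d_ori / (2 * np.pi) * angle_bins) % angle_bins,
                       int(round(d_scale)))
            votes[bin_key] += 1      # vote in the pose accumulator
    return max(votes, key=votes.get) if votes else None
```

Clustering votes in pose space suppresses accidental word matches, since only geometrically consistent pairs accumulate in the same bin.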
Tue, 26 May 2015 08:00:00 EDT An image processing device that generates a pixel value of a pixel and interpolates the pixel with the pixel value, the image processing device including: a periodicity determining unit that determines whether an area including the pixel is a periodic area; a boundary determining unit that determines whether the pixel belongs to the periodic area or a non-periodic area; a first pixel value generating unit that generates a first pixel value; a second pixel value generating unit that generates a second pixel value; a control unit that determines whether the first pixel value generating unit or the second pixel value generating unit is to be used, based on determination results of the periodicity determining unit and the boundary determining unit; and a pixel value inputting unit that inputs one of the first pixel value and the second pixel value to the pixel.
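One plausible building block for the periodicity determining unit is an autocorrelation test, with the first generating unit copying the value one period away. Both pieces below are assumptions sketched for illustration; the patent does not disclose these particular computations.

```python
import numpy as np

def detect_period(strip, min_period=2):
    # Autocorrelation-based periodicity test for a 1-D strip of pixel values;
    # a strong repeat at some lag suggests the strip lies in a periodic area.
    x = strip - strip.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                                  # normalise to lag 0
    lag = min_period + int(np.argmax(ac[min_period:len(x) // 2]))
    return lag, float(ac[lag])                       # (period, strength)

def periodic_fill(strip, idx, period):
    # Illustrative "first pixel value": reuse the value one period away
    # from the pixel being interpolated.
    return strip[idx - period] if idx >= period else strip[idx + period]
```

A control unit would then choose this periodic generator inside periodic areas, and fall back to a generic interpolator (the "second pixel value") elsewhere or on boundaries with non-periodic areas.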
Tue, 26 May 2015 08:00:00 EDT An image-based georeferencing system comprises an image receiver, an image identification processor, a reference feature determiner, and a feature locator. The image receiver is configured for receiving a first image for use in georeferencing. The image comprises digital image information. The system includes a communicative coupling to a database of georeferenced images. The image identification processor is configured for identifying a second image from the georeferenced images database that correlates to the first image. The system includes a communicative coupling to a geographic location information system. The reference feature determiner is configured for determining a reference feature common to both the second image and the first image. The feature locator is configured for accessing the geographic location information system to identify and obtain geographic location information related to the common reference feature.
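The pipeline can be sketched end-to-end if images are reduced to sets of detected feature identifiers. Representing images as feature-id sets and the overlap-count correlation are simplifying assumptions, not the system's actual image matching.

```python
def georeference(query_features, georef_db, geo_info):
    # query_features: set of feature ids detected in the first image
    # georef_db:      {image_id: set of feature ids} (georeferenced images)
    # geo_info:       {feature_id: (lat, lon)} (geographic location info system)
    # Identify the "second image": the stored image sharing the most features.
    second = max(georef_db, key=lambda i: len(georef_db[i] & query_features))
    common = georef_db[second] & query_features   # common reference features
    # Look up geographic locations of the common reference features.
    return {f: geo_info[f] for f in common if f in geo_info}
```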
Tue, 26 May 2015 08:00:00 EDT The image signature extraction device includes an extraction unit and a generation unit. The extraction unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair in which both the combination of shapes of the two sub-regions and the relative position between the two sub-regions differ from those of at least one other pair of sub-regions. The generation unit generates, based on the extracted region features of the respective sub-regions, an image signature to be used for identifying the image.
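A minimal sketch of the two units, assuming rectangular sub-regions, mean intensity as the region feature, and one sign bit per pair as the signature encoding (all three are assumptions for illustration):

```python
import numpy as np

def image_signature(image, region_pairs):
    # Each pair gives two sub-regions as (y0, y1, x0, x1) bounds; the region
    # feature here is mean intensity, quantised to one sign bit per pair.
    bits = []
    for (ay0, ay1, ax0, ax1), (by0, by1, bx0, bx1) in region_pairs:
        fa = image[ay0:ay1, ax0:ax1].mean()   # extraction unit: region feature A
        fb = image[by0:by1, bx0:bx1].mean()   # extraction unit: region feature B
        bits.append(1 if fa > fb else 0)      # generation unit: signature bit
    return bits
```

Varying both the shapes and the relative positions across pairs, as the abstract requires, makes the resulting bit string sensitive to different spatial structures in the image.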
Tue, 26 May 2015 08:00:00 EDT A converted reference image generated from a reference image is searched for a first corresponding point corresponding to a first standard point included in a converted standard image generated from a standard image. Based on the position of the first corresponding point in the converted reference image, a second search standard point is determined on the reference image. Further, based on phase information on each frequency component of a second standard area including a second standard point corresponding to the first standard point of the standard image, and on phase information on each frequency component of a second reference area including the second search standard point of the reference image, the reference image is searched for a second corresponding point corresponding to the second standard point. Frequency component information obtained in the computation at one of the two stages above is reused in the search at the other stage.
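The phase-based search at the second stage is commonly realised as phase-only correlation, sketched below; this is a standard technique consistent with the abstract, not the patent's verbatim implementation, and the two-stage FFT reuse is omitted for brevity.

```python
import numpy as np

def phase_correlate(patch_a, patch_b):
    # Phase-only correlation: the peak location of the inverse transform of
    # the normalised cross-power spectrum gives the shift between patches.
    fa = np.fft.fft2(patch_a)
    fb = np.fft.fft2(patch_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak coordinates to signed offsets
    dy = peak[0] if peak[0] <= patch_a.shape[0] // 2 else peak[0] - patch_a.shape[0]
    dx = peak[1] if peak[1] <= patch_a.shape[1] // 2 else peak[1] - patch_a.shape[1]
    return dy, dx
```

In a two-stage scheme, the FFT of the standard area computed at one stage can be cached and reused at the other, which is the efficiency gain the abstract describes.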
Tue, 26 May 2015 08:00:00 EDT Embodiments describe an image retrieval apparatus. The image retrieval apparatus includes an unlabelled image selector for selecting one or more unlabelled images from an image database; and a main learner that trains in each feedback round of the image retrieval, estimates the relevance of images in the image database and the user's intention, and determines retrieval results, wherein the main learner makes use of the unlabelled images selected by the unlabelled image selector in the estimation. In addition, the image retrieval apparatus may also include an active selector for selecting, in each feedback round and according to the estimation results of the main learner, one or more unlabelled images from the image database for the user to label.
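One feedback round can be sketched with a nearest-centroid relevance estimate and an uncertainty-based active selector. The centroid scorer and the ambiguity criterion are illustrative assumptions standing in for the apparatus's unspecified learners.

```python
import numpy as np

def feedback_round(features, labelled, n_active=2):
    # features: (N, D) image feature vectors
    # labelled: {index: 1 (relevant) or 0 (irrelevant)} from the user so far
    pos = [i for i, y in labelled.items() if y == 1]
    neg = [i for i, y in labelled.items() if y == 0]
    d_pos = np.linalg.norm(features - features[pos].mean(axis=0), axis=1)
    d_neg = np.linalg.norm(features - features[neg].mean(axis=0), axis=1)
    relevance = d_neg - d_pos           # larger = closer to relevant examples
    ranking = np.argsort(-relevance)    # retrieval results for this round
    unlabelled = [i for i in range(len(features)) if i not in labelled]
    # active selector: most ambiguous unlabelled images for the user to label
    to_label = sorted(unlabelled, key=lambda i: abs(relevance[i]))[:n_active]
    return ranking, to_label
```

Each round the user's new labels sharpen the centroids, so the ranking and the actively selected images both improve with feedback.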