Friday 10 November 2017

Cvthreshold Binary Options


Miscellaneous Image Transformations

adaptiveThreshold – Applies an adaptive threshold to an array.
src – Source 8-bit single-channel image.
dst – Destination image of the same size and the same type as src.
maxValue – Non-zero value assigned to the pixels for which the condition is satisfied. See the details below.
adaptiveMethod – Adaptive thresholding algorithm to use, ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C. See the details below.
thresholdType – Thresholding type that must be either THRESH_BINARY or THRESH_BINARY_INV.
blockSize – Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
C – Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive, but it may be zero or negative as well.
The function transforms a grayscale image to a binary image according to the formulae: for THRESH_BINARY, dst(x, y) = maxValue if src(x, y) > T(x, y) and 0 otherwise, where the threshold T(x, y) is the mean (or Gaussian-weighted mean) of the blockSize x blockSize neighborhood of (x, y), minus C.

cvtColor – Converts an image from one color space to another.
src – input image: 8-bit unsigned, 16-bit unsigned (CV_16UC...), or single-precision floating-point.
dst – output image of the same size and depth as src.
code – color space conversion code (see the description below).
dstCn – number of channels in the destination image; if the parameter is 0, the number of channels is derived automatically from src and code.
The function converts an input image from one color space to another. In case of a transformation to or from the RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB, but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit blue component, the second byte will be green, and the third byte will be red.
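Before moving on, the adaptive threshold computation described above can be made concrete with a pure-Python sketch (no OpenCV; the function name and sample image are made up for illustration, and borders are handled by clamping indices, which mimics a replicated border):

```python
def adaptive_threshold_mean(img, max_value, block_size, c):
    """THRESH_BINARY with ADAPTIVE_THRESH_MEAN_C: a pixel becomes max_value when
    it exceeds the mean of its block_size x block_size neighborhood minus c."""
    h, w = len(img), len(img[0])
    r = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clamp indices at the image border (a replicated-border neighborhood).
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            t = sum(vals) / len(vals) - c
            out[y][x] = max_value if img[y][x] > t else 0
    return out

# A bright 3x3 blob on a dark background: only the blob survives thresholding.
img = [[10, 10, 10, 10, 10],
       [10, 200, 200, 200, 10],
       [10, 200, 200, 200, 10],
       [10, 200, 200, 200, 10],
       [10, 10, 10, 10, 10]]
binary = adaptive_threshold_mean(img, 255, 3, 5)
```

Because the threshold is local, this keeps the blob even though a single global threshold near the overall image mean would behave quite differently.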
The fourth, fifth, and sixth bytes would then be the second pixel (blue, then green, then red), and so on. The conventional ranges for the R, G, and B channel values are: 0 to 255 for CV_8U images, 0 to 65535 for CV_16U images, and 0 to 1 for CV_32F images. In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for the RGB to Luv transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, it will have the 0..255 value range instead of the 0..1 range assumed by the function. So, before calling cvtColor, you first need to scale the image. If you use cvtColor with 8-bit images, the conversion will lose some information. For many applications, this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors, or that convert an image before an operation and then convert back. If the conversion adds the alpha channel, its value will be set to the maximum of the corresponding channel range: 255 for CV_8U, 65535 for CV_16U, 1 for CV_32F. The function can perform the following transformations: RGB to/from GRAY (CV_BGR2GRAY, CV_RGB2GRAY, CV_GRAY2BGR, CV_GRAY2RGB); transformations within RGB space such as adding/removing the alpha channel, reversing the channel order, conversion to/from 16-bit RGB color (R5:G6:B5 or R5:G5:B5), as well as conversion to/from grayscale. (Not currently supported:) L, u, and v are left as is. The above formulae for converting RGB to/from various color spaces have been taken from various sources on the web, primarily from the Charles Poynton site, poynton/ColorFAQ.html.

Bayer to RGB (CV_BayerBG2BGR, CV_BayerGB2BGR, CV_BayerRG2BGR, CV_BayerGR2BGR, CV_BayerBG2RGB, CV_BayerGB2RGB, CV_BayerRG2RGB, CV_BayerGR2RGB). The Bayer pattern is widely used in CCD and CMOS cameras. It lets you get color pictures from a single plane where the R, G, and B pixels (sensors of a particular component) are interleaved. The output RGB components of a pixel are interpolated from 1, 2, or 4 neighbors of the pixel having the same color. There are several modifications of the above pattern that can be achieved by shifting the pattern one pixel left and/or one pixel up. The two letters in the conversion constants CV_Bayer*2BGR and CV_Bayer*2RGB indicate the particular pattern type. These are the components from the second row, second and third columns, respectively. For example, the above pattern has the very popular "BG" type.

distanceTransform – Calculates the distance to the closest zero pixel for each pixel of the source image.
src – 8-bit, single-channel (binary) source image.
dst – Output image with calculated distances. It is a 32-bit floating-point, single-channel image of the same size as src.
distanceType – Type of distance. It can be CV_DIST_L1, CV_DIST_L2, or CV_DIST_C.
maskSize – Size of the distance transform mask. It can be 3, 5, or CV_DIST_MASK_PRECISE (the latter option is only supported by the first function). In case of the CV_DIST_L1 or CV_DIST_C distance type, the parameter is forced to 3 because a 3x3 mask gives the same result as a larger aperture.
labels – Optional output 2D array of labels (the discrete Voronoi diagram). It has the type CV_32SC1 and the same size as src. See the details below.
labelType – Type of the label array to build. If labelType == DIST_LABEL_CCOMP, then each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label.
If labelType == DIST_LABEL_PIXEL, then each zero pixel (and all the non-zero pixels closest to it) gets its own label. The distanceTransform functions calculate the approximate or precise distance from every pixel of the binary image to the nearest zero pixel. For zero image pixels, the distance will obviously be zero. When maskSize == CV_DIST_MASK_PRECISE and distanceType == CV_DIST_L2, the function runs the algorithm described in Felzenszwalb04. This algorithm is parallelized with the TBB library. In other cases, the Borgefors86 algorithm is used. This means that for a pixel the function finds the shortest path to the nearest zero pixel consisting of basic shifts: horizontal, vertical, diagonal, or knight's move (the latter is available for a 5x5 mask). The overall distance is calculated as a sum of these basic distances. Since the distance function must be symmetric, all of the horizontal and vertical shifts must have the same cost (denoted as a), all the diagonal shifts must have the same cost (denoted as b), and all knight's moves must have the same cost (denoted as c). For the CV_DIST_C and CV_DIST_L1 types, the distance is calculated precisely, whereas for CV_DIST_L2 (Euclidean distance) the distance can be calculated only with a relative error (a 5x5 mask gives more accurate results). For a, b, and c, OpenCV uses the values suggested in the original paper. Typically, for a fast, coarse CV_DIST_L2 distance estimation, a 3x3 mask is used. For a more accurate CV_DIST_L2 distance estimation, a 5x5 mask or the precise algorithm is used. Note that both the precise and the approximate algorithms are linear in the number of pixels.
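The two-pass mask propagation described above can be sketched in pure Python. This is a simplified illustration, not the OpenCV implementation; it uses the CV_DIST_L1 costs a = 1 (edge step) and b = 2 (diagonal step), for which the 3x3 chamfer result equals the exact city-block distance. The function name and the sample grid are made up:

```python
INF = float("inf")

def chamfer_l1(binary):
    """Two-pass 3x3 chamfer distance transform: distance from each pixel to the
    nearest zero pixel, with costs a=1 (horizontal/vertical) and b=2 (diagonal)."""
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    a, b = 1, 2
    # Forward pass: propagate distances from the top-left neighbors.
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((0, -1, a), (-1, 0, a), (-1, -1, b), (-1, 1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    # Backward pass: propagate distances from the bottom-right neighbors.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((0, 1, a), (1, 0, a), (1, 1, b), (1, -1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    return d

# A single zero pixel in the middle: distances grow as the L1 (city-block) metric.
grid = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
dist = chamfer_l1(grid)
```

The CV_DIST_L2 case uses the same two-pass structure but with non-integer a and b costs, which is why it is only approximate with small masks.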
The second variant of the function not only computes the minimum distance for each pixel but also identifies the nearest connected component consisting of zero pixels (labelType == DIST_LABEL_CCOMP) or the nearest zero pixel (labelType == DIST_LABEL_PIXEL). The index of the component/pixel is stored in labels(x, y). When labelType == DIST_LABEL_CCOMP, the function automatically finds connected components of zero pixels in the input image and marks them with distinct labels. When labelType == DIST_LABEL_PIXEL, the function scans through the input image and marks all the zero pixels with distinct labels. In this mode, the complexity is still linear. That is, the function provides a very fast way to compute the Voronoi diagram for a binary image. Currently, the second variant can use only the approximate distance transform algorithm, i.e. maskSize = CV_DIST_MASK_PRECISE is not yet supported. An example on using the distance transform can be found at opencv_source_code/samples/cpp/distrans.cpp. (Python) An example on using the distance transform can be found at opencv_source/samples/python2/distrans.py.

floodFill – Fills a connected component with the given color.
image – Input/output 1- or 3-channel, 8-bit or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask – Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and an output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges.
On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint – Starting point.
newVal – New value of the repainted domain pixels.
loDiff – Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff – Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect – Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags – The operation flags. The first 8 bits contain a connectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | (255 << 8) will consider 4 nearest neighbors and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or (|):
FLOODFILL_FIXED_RANGE – If set, the difference between the current pixel and the seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating).
FLOODFILL_MASK_ONLY – If set, the function does not change the image (newVal is ignored) and only fills the mask with the value specified in bits 8-16 of flags as described above. This option only makes sense in the function variants that have the mask parameter.
The floodFill functions fill a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at (x, y) is considered to belong to the repainted domain if src(x', y') - loDiff <= src(x, y) <= src(x', y') + upDiff in the case of a grayscale image and floating range, or src(seedPoint) - loDiff <= src(x, y) <= src(seedPoint) + upDiff in the case of a grayscale image and fixed range (for a color image, the corresponding bounds are applied to each channel), where src(x', y') is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of a pixel must be close enough to: the color/brightness of one of its neighbors that already belong to the connected component, in the case of a floating range; or the color/brightness of the seed point, in the case of a fixed range. Use these functions either to mark a connected component with the specified color in place, or to build a mask and then extract the contour, or to copy the region to another image, and so on. An example using the FloodFill technique can be found at opencv_source_code/samples/cpp/ffilldemo.cpp. (Python) An example using the FloodFill technique can be found at opencv_source_code/samples/python2/floodfill.py.

integral – Calculates the integral of an image.

watershed – Performs marker-based image segmentation.
image – Input 8-bit 3-channel image.
markers – Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
The function implements one of the variants of the non-parametric, marker-based segmentation algorithm described in Meyer92.
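Returning to floodFill for a moment, the fixed-range membership rule and the flags bit packing described above can be sketched in pure Python. This is a simplified illustration, not the OpenCV implementation; flood_fill_fixed and the sample values are made up:

```python
from collections import deque

def flood_fill_fixed(img, seed, new_val, lo_diff, up_diff, connectivity=4):
    """Grayscale flood fill with a fixed range: a pixel joins the component when
    seed_value - lo_diff <= pixel <= seed_value + up_diff."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    seed_val = img[sy][sx]
    lo, hi = seed_val - lo_diff, seed_val + up_diff
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-connectivity: shared edges
    if connectivity == 8:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-connectivity: shared corners
    filled = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        img[y][x] = new_val
        for dy, dx in steps:
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in filled
                    and lo <= img[ny][nx] <= hi):
                filled.add((ny, nx))
                queue.append((ny, nx))
    return img

# Packing connectivity and mask-fill value the way the flags parameter does:
flags = 4 | (255 << 8)   # 4-connectivity, fill the mask with 255

img = [[10, 11, 50],
       [12, 10, 50],
       [50, 50, 50]]
flood_fill_fixed(img, (0, 0), 99, lo_diff=3, up_diff=3)
```

A floating-range fill would instead compare each candidate pixel against the neighbor it is reached from, so the filled region can drift gradually in brightness.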
Before passing the image to the function, you have to roughly outline the desired regions in the image markers with positive (>0) indices. So, every region is represented as one or more connected components with the pixel values 1, 2, 3, and so on. Such markers can be retrieved from a binary mask using findContours() and drawContours() (see the watershed.cpp demo). The markers are "seeds" of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0's. In the function output, each pixel in markers is set to a value of the "seed" components, or to -1 at boundaries between the regions. A visual demonstration and usage example of the function can be found in the OpenCV samples directory (see the watershed.cpp demo). Note that it is not necessary that every two neighbor connected components are separated by a watershed boundary (-1's pixels); they may touch each other in the initial marker image passed to the function. An example using the watershed algorithm can be found at opencv_source_code/samples/cpp/watershed.cpp. (Python) An example using the watershed algorithm can be found at opencv_source_code/samples/python2/watershed.py.

grabCut – Runs the GrabCut algorithm.
mask – Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values:
GC_BGD defines an obvious background pixel.
GC_FGD defines an obvious foreground (object) pixel.
GC_PR_BGD defines a possible background pixel.
GC_PR_FGD defines a possible foreground pixel.
rect – ROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background".
The parameter is only used when mode == GC_INIT_WITH_RECT.
bgdModel – Temporary array for the background model. Do not modify it while you are processing the same image.
fgdModel – Temporary array for the foreground model. Do not modify it while you are processing the same image.
iterCount – Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode == GC_INIT_WITH_MASK or mode == GC_EVAL.
mode – Operation mode that could be one of the following:
GC_INIT_WITH_RECT – The function initializes the state and the mask using the provided rectangle. After that it runs iterCount iterations of the algorithm.
GC_INIT_WITH_MASK – The function initializes the state using the provided mask. Note that GC_INIT_WITH_RECT and GC_INIT_WITH_MASK can be combined. Then, all the pixels outside of the ROI are automatically initialized with GC_BGD.
GC_EVAL – The value means that the algorithm should just resume.
The function implements the GrabCut image segmentation algorithm. See the grabcut.cpp sample to learn how to use the function.

DB: 2.97: Display An Image In OpenCV ks
I have the O'Reilly OpenCV book, and I am trying the example code in the "Display a Picture" section. The image I want to display is a JPEG stored on the desktop. Here is a copy of the code:

// Sample Project.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include "highgui.h"  // the GUI header
#include "cv.h"       // main OpenCV header
int main(int argc, char** argv)
{
    IplImage* img = cvLoadImage("p-pod-jpeg.jpg", 1);
    cvNamedWindow("Example 1");
    cvShowImage("Example 1", img);
    cvWaitKey(0);
    cvReleaseImage(&img);
    cvDestroyWindow("Example 1");
}

The code compiled successfully, but when I went into debug mode, the command-line terminal appeared along with a window that has a gray background.
The image is not displayed. What am I doing wrong?
Quoting Gennady Fedorov (Intel): Please check whether the img pointer is not null after it is returned by cvLoadImage(...). --Gennady

DB: 2.87: OpenCV Video Libraries For Zynq 1j
I use a Zynq board (ZC702) and the latest version of the Xilinx software (ISE 14.6 and Vivado 2013.2). I would like to use the OpenCV video library only on the ARM processor (PS part). I have made a system with just the PS part and exported it to the SDK. In the SDK, I used the following C code, which was adapted from XAPP1167. The code gives me an error because it cannot find the library (hls_opencv.h). Also, I tried adding the library files from this path (C:\Xilinx\Vivado_HLS\2013.2\include), but that gives me several errors that are caused by (x_hls_utils.h). I have two questions: Where can I find the correct library to install it on Windows? How can I install the library for the SDK? Thanks for your time and help.

int main(int argc, char** argv)
{
    IplImage* src = cvLoadImage("test_1080p.bmp");
    IplImage* dst = cvCreateImage(cvGetSize(src), /* ... */);
    cvScale(src, dst, 2, 0);
    cvScale(src, dst, 2, 0);
    cvScale(src, dst, 1, 0);
    cvSubS(dst, cvScalar(100, 100, 100), src);
    /* ... (src, dst) ... */
    char tempbuf[2000];
    sprintf(tempbuf, "diff --brief -w %s %s", "result_1080p.bmp", "result_1080p_golden.bmp");
    int ret = system(tempbuf);
    if (ret != 0) { /* ... */ } else { /* ... */ }
    return ret;
}

DB: 2.83: UMC And OpenCV 8a
Trying to decode an AVI file using UMC. At the end of my code I wanted to view the video frames by using the highgui library of OpenCV. However, all I can see is nothing but a gray window. I followed the steps given in the UMC document, using the sample codes. Here is my code:

#include "ippdefs.h"
#include "ipps.h"
#include "ippi.h"
#include "ippj.h"
#include "ippcc.h"
#include "vm_time.h"
#include "umc_defs.h"
#include "umc_structures.h"
#include "umc_data_reader.h"
#include "umc_splitter.h"
#include "umc_video_decoder.h"
#include "umc_video_data.h"
#include "umc_file_reader.h"
#include "umc_fio_reader.h"
#include "umc_avi_splitter.h"
#include "cv.h"
#include "highgui.h"
#include "ippimage.h"

using namespace UMC;

UMC::Status InitDataReader(DataReader* datareader, char* filename)
{
    FileReaderParams readerparams;
    /* ... */
}

IplImage* InitializeImage(int width, int height, int bitDepth, int channels)
{
    IplImage* imgTmp = cvCreateImageHeader(cvSize(width, height), bitDepth, channels);
    // iplCreateImageHeader(1, 2, 3, 4, 5, 6, 8, 8, 8, 0, IPL_DEPTH_32S, "ARGB", "ARGB",
    //                      IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_4BYTES,
    //                      768, 576, NULL, NULL, NULL, NULL)
    imgTmp->imageData = (char*) malloc(imgTmp->height * imgTmp->widthStep); // IMAGEWIDTH*IMAGEHEIGHT*channels*4
    return imgTmp;
}

UMC::Status InitAviSplitter(Splitter* avisplitter, DataReader* datareader)
{
    SplitterParams splitterparams;
    splitterparams.m_lFlags = UMC::VIDEO_SPLITTER;
    splitterparams.m_pDataReader = datareader;
    /* ... */
}

UMC::Status InitMPEG4VDecoder(VideoDecoder* videodecoder, VideoStreamInfo* videoinfo)
{
    if (videoinfo->stream_type != UMC::MPEG4_VIDEO) return UMC_ERR_FAILED;
    decoderparams.info = *videoinfo;
    decoderparams.lFlags = 0;
    /* ... */
}

UMC::Status DecodeVideo(char* inputfile)
{
    Ipp32u track = 0;
    FIOReader src;
    AVISplitter avispl;
    SplitterInfo* splinfo;
    VideoStreamInfo* videoinfo;
    MPEG4VideoDecoder dec;
    MediaData in;
    VideoData out;
    IplImage* img = InitializeImage(768, 576, IPL_DEPTH_8U, /* ... */);
    if (umcRes != UMC_OK) return umcRes;
    umcRes = InitAviSplitter(&avispl, &src);
    if (umcRes != UMC_OK) return umcRes;
    umcRes = avispl.GetInfo(&splinfo);
    if (umcRes != UMC_OK) return umcRes;
    for (track = 0; track < splinfo->m_nOfTracks; track++)
        if (splinfo->m_ppTrackInfo[track]->m_Type == UMC::TRACK_MPEG4V) break;
    if (track == splinfo->m_nOfTracks) return UMC_ERR_INVALID_STREAM;
    videoinfo = (VideoStreamInfo*) splinfo->m_ppTrackInfo[track]->m_pStreamInfo;
    videoinfo->color_format = UMC::YUY2;
    umcRes = InitMPEG4VDecoder(&dec, videoinfo);
    if (umcRes != UMC_OK) return umcRes;
    umcRes = out.Init(videoinfo->clip_info.width, videoinfo->clip_info.height, videoinfo->color_format);
    if (umcRes != UMC_OK) return umcRes;
    umcRes = out.Alloc();
    if (umcRes != UMC_OK) return umcRes;
    while (UMC_OK == umcRes || UMC_ERR_NOT_ENOUGH_DATA == umcRes)
    {
        while (UMC_ERR_NOT_ENOUGH_DATA == (umcRes = avispl.GetNextData(&in, track))) { /* ... */ }
        if (umcRes != UMC_OK) break;
        memcpy(img->imageData, out.m_pbAllocated, out.GetMappingSize());
        cvShowImage("Display", img);
        cvWaitKey(0);
    }
    return umcRes;
}

int main()
{
    ippStaticInit();
    char* inp = "C:\\Documents and Settings\\cagri\\My Documents\\Visual Studio 2005\\Projects\\Disk\\pb.avi";
    /* ... */
}

I set the m_pbAllocated variable of the VideoData class to public. My code can decode the file correctly. At this point my questions are: 1- Is my problem in getting the frames from the decoder? 2- Or is my problem in transferring the data from out (VideoData) to img (IplImage)? Thanks in advance.
It is not clear where your problem is. Have you tried to play that particular AVI file with the simple_player application (it is part of the IPP audio-video-codecs sample)? Please take a look at the UMC documentation to find out how to access the data in the MediaData object.

DB: 2.71: C++ Templates And OpenCV IplImages, Some Advice Please 9m
Hi all, I'm working on an engineering doctorate that involves computer vision. My code base involves a lot of image manipulation; I simplify the routine things using OpenCV's IplImages, rather than my own image classes. The thing is, many of my classes need to be flexible about the data type being used. IplImages can be of a variety of types, using templates as well, but the drawback of that is that I am not sure how to use templates for my own classes, and I have resorted to function templates instead. Here is what I am currently using.
class foo
{
private:
    template<class T> void bar2(IplImage* image);
public:
    void bar(IplImage* image);
};

template<class T> void foo::bar2(IplImage* image)
{
    // the actual code goes here, using type T
}

void foo::bar(IplImage* image)
{
    // check the type using the image depth, run the appropriate template
}

Thanks kakTuZ for your help below; that is actually almost what I am currently doing (without the virtual, though; pseudo-code in my OP), and it works. I was just looking for a solution that did not involve having to explicitly define the allowed image depths, which would mean that as I expand my algorithm to different data types (including ones I define myself) I would not have to go back to each class I have written using this technique (currently around 10, and I expect another 20 or so).

DB: 2.69: Resource Release Problems When Using OpenCV's Canny Algorithm 7k
Hi, has anyone used OpenCV's Canny algorithm? I have problems with releasing resources. (Sorry for my poor English.) The following is the C code, which wraps OpenCV's Canny algorithm for edge detection.
/**
 * Wraps OpenCV's Canny algorithm for edge detection.
 * @param image       gray image
 * @param width       width of the gray image (condition: width % 4 == 0)
 * @param height      height of the gray image
 * @param threshold1  the first threshold
 * @param threshold2  the second threshold
 * @return edges image
 */
JNIEXPORT jbyteArray JNICALL Java_CannyNI_cannyDetector(JNIEnv* env, jclass cls, jbyteArray image,
                                                        jint width, jint height,
                                                        jdouble threshold1, jdouble threshold2)
{
    // -------------------------------------------------------------------
    // (1) load jbyteArray into IplImage: jbyteArray -> char* -> IplImage
    // -------------------------------------------------------------------
    CvSize cvsize;
    char* cimage = (char*) env->GetByteArrayElements(image, NULL); // cimage must be released
    IplImage* cvimage = cvCreateImage(cvsize, 8, 1);               // cvimage must be released
    cvSetData(cvimage, cimage, width);
    // -------------------------------------------------------------------
    // (3) retrieve IplImage into jbyteArray: IplImage -> char* -> jbyteArray
    // -------------------------------------------------------------------
    char* cedges; // cedges must be released
    int step;
    cvGetRawData(cvedges, (uchar**) &cedges, &step, &cvsize);
    int size = width * height;
    jbyteArray edges = env->NewByteArray(size);
    env->SetByteArrayRegion(edges, 0, size, (signed char*) cedges);
    // NOTE: the following statement leads to an EXCEPTION_ACCESS_VIOLATION (0xc0000005) error
    // env->ReleaseByteArrayElements(...)   deallocation error
    // cvReleaseImage(&cvimage);
}
DB: 2.68: Accessing The JpegXR Memory Buffer After Encoding j8
I have written a program to successfully encode RGB buffers to JpegXR files. I'm using the UIC sample codes provided in "w_ipp-samples_p_7.0.7.064\ipp-samples\image-codecs\uic\src\application\uic_transcoder_con". However, now I need to access the JpegXR image buffer (with the headers and everything) after encoding, and not write it to file at all. Is this JpegXR image buffer "BaseStreamOutput out" in "jpegxr.cpp"? How can I avoid writing to JpegXR files?

"jpegxr.cpp" // found in sample code
IMERROR SaveImageJPEGXR( /* ... */ )
{
    // Some code.
    if (ExcStatusOk != encoder.WriteHeader()) return IE_WHEADER;
    if (ExcStatusOk != encoder.WriteData()) return IE_WDATA;
    if (ExcStatusOk != encoder.FreeData()) return IE_RESET;
    // Some more code.
}

In the UIC sample code, there is a memory buffer output (ipp-samples\image-codecs\uic\src\io\uicio), other than the file output.
Chatchai - the UVmap is not something that you would display as you are doing above. The values in the UVmap are used to translate the depth x and y coordinates into the RGB x and y coordinates. There are examples of its use in the PerC SDK samples. You are capturing the data correctly and now just have to use it.

DB: 2.60: LNK2019 And LNK2001 a3
I'm making an application using the OpenCV libraries, but I think this problem is not related to OpenCV. To begin with, my error message is as below.

1>------ Build started: Project: 001exLoadImage, Configuration: Debug Win32 ------
1>Compiling...
1>main.cpp
1>c:\opencv2.2\include\opencv2\flann\dist.h : warning C4819: The file contains a character that cannot be represented in the current code page (949). Save the file in Unicode format to prevent data loss
1>c:\opencv2.2\include\opencv2\flann\dist.h : warning C4819: The file contains a character that cannot be represented in the current code page (949). Save the file in Unicode format to prevent data loss
1>ThresholdingWindows.cpp
1>c:\opencv2.2\include\opencv2\flann\dist.h : warning C4819: The file contains a character that cannot be represented in the current code page (949). Save the file in Unicode format to prevent data loss
1>c:\opencv2.2\include\opencv2\flann\dist.h :
warning C4819: The file contains a character that cannot be represented in the current code page (949). Save the file in Unicode format to prevent data loss
1>c:\documents and settings\com practice5\example001imageload\001exloadimage\thresholdingwindows.cpp(87) : warning C4309: '' : truncation of constant value
1>c:\documents and settings\com practice5\example001imageload\001exloadimage\thresholdingwindows.cpp(92) : warning C4309: '' : truncation of constant value
1>c:\documents and settings\com practice5\example001imageload\001exloadimage\thresholdingwindows.cpp(97) : warning C4309: '' : truncation of constant value
1>Generating code...
1>Linking...
1>DebugingTool.obj : error LNK2001: unresolved external symbol "private: static bool DebugingTool::isDebugingMode" (?isDebugingMode@DebugingTool@@0_NA)
1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall HandAnalysis::~HandAnalysis(void)" (??1HandAnalysis@@QAE@XZ) referenced in function _main
1>ThresholdingWindows.obj : error LNK2001: unresolved external symbol "public: __thiscall HandAnalysis::~HandAnalysis(void)" (??1HandAnalysis@@QAE@XZ)
1>main.obj : error LNK2019: unresolved external symbol "public: struct IplImage * __thiscall HandAnalysis::getBinaryImage(void)" (?getBinaryImage@HandAnalysis@@QAEPAUIplImage@@XZ) referenced in function _main
1>main.obj : error LNK2019: unresolved external symbol "public: struct IplImage * __thiscall HandAnalysis::getRareImage(void)" (?getRareImage@HandAnalysis@@QAEPAUIplImage@@XZ) referenced in function _main
1>ThresholdingWindows.obj : error LNK2001: unresolved external symbol "public: struct IplImage * __thiscall HandAnalysis::getRareImage(void)" (?getRareImage@HandAnalysis@@QAEPAUIplImage@@XZ)
1>main.obj : error LNK2019: unresolved external symbol "public: void __thiscall HandAnalysis::recognizingHand(void)" (?recognizingHand@HandAnalysis@@QAEXXZ) referenced in function _main
1>main.obj :
Error LNK2019: símbolo externo no resuelto public: thiscall HandAnalysis :: HandAnalysis (struct IplImage const) (0HandAnalysisQAEPBUIplImageZ) se hace referencia en la función principal 1ThresholdingWindows. obj. Error LNK2019: símbolo externo no resuelto public: void thiscall HandAnalysis :: setThreshold (carácter sin signo, carácter sin signo, carácter sin signo, carácter sin signo, carácter sin signo, carácter sin signo) (setThresholdHandAnalysisQAEXEEEEEEZ) referenciado en la función public: void thiscall ThresholdingWindows :: showThresholdingImages (class HandAnalysis ) (ShowThresholdingImagesThresholdingWindowsQAEXVHandAnalysisZ) 1C: Documentos y Settingscom Practice5Example001ImageLoadDebug001exLoadImage. exe. Error fatal LNK1120: 7 externas no resueltas 1Build log se guardó en el archivo: // c: Documentos y Settingscom Practice5Example001ImageLoad001exLoadImageDebugBuildLog. htm 1001exLoadImage - 10 error (s), 7 warning (s) Constructor: 0 sucedido, 1 fallado, 0 up-to - Fecha, 0 omitido reserched muchos hilos sobre LNK2019 y LNK2001. La mayoría de las respuestas dicen que las funciones de chequeo están definidas y las dependencias son correctas. Entonces he comprobado estas cosas pero no puedo encontrar ninguna solución. Por favor, ayúdame. My sources are as below main. cpp ----------------------------------------------------------------------------- include HandAnalysis. h include ThresholdingWindows. h include stdio. h include opencv. hpp void main() IplImage frame CvCapture cam ThresholdingWindows thresholdWindows ThresholdingWindows() cam cvCreateCameraCapture(0) cvNamedWindow(RareImage, 1) cvNamedWindow(Mask, 1) while(1) char c cvWaitKey(0) //loopCount if(c 27) break frame cvQueryFrame(cam) if(frame NULL) break HandAnalysis hand HandAnalysis(frame) thresholdWindows. showThresholdingImages(hand) hand. recognizingHand() cvShowImage(RareImage, hand. 
getRareImage());
        //fprintImage2BitSequence(frame);
        //fprintf3DepthImage2Int("3Depth", yCrCbFrame);
        //fprintf3DepthImage2Int("3Depth", frame);
        cvShowImage("Mask", hand.getBinaryImage());
    }
}

HandAnalysis.h
---------------------------------------------------------------
#ifndef HANDANALYSISH
#define HANDANALYSISH
#ifndef OPENCVHPPISINCLUDED
#define OPENCVHPPISINCLUDED
#include <opencv.hpp>
#endif //OPENCVHPPISINCLUDED
#include <stdio.h>

class HandAnalysis
{
public:
    HandAnalysis(const IplImage *rareImage);
    ~HandAnalysis();
public:
    void recognizingHand();
    IplImage *getBinaryImage();
    IplImage *getRareImage();
    void setThreshold(unsigned char rlowerthreshold, unsigned char rupperthreshold, unsigned char glowerthreshold, unsigned char gupperthreshold, unsigned char blowerthreshold, unsigned char bupperthreshold);
private:
    IplImage *regionGrowing(IplImage *rareImage, IplImage *mask);
    IplImage *regioninitial(IplImage *rareImage, unsigned char rlowerthreshold, unsigned char rupperthreshold, unsigned char glowerthreshold, unsigned char gupperthreshold, unsigned char blowerthreshold, unsigned char bupperthreshold);
    CvPoint findAPointOnHand();
public:
    int lowFactors[3];
    int highFactors[3];
private:
    const IplImage *rareImage;
    IplImage *formatCvtedImage;
    IplImage *binaryImage;
    bool isRegionInitialed; //whether region initial is performed or not
};
#endif //HANDANALYSISH

HandAnalysis.cpp
---------------------------------------------------------------
//
// Title.  HandAnalysis.cpp
// Author. J. S Choi
// This file is only about the class HandAnalysis. This class releases automatically all IplImages except rareImage
//
#include "HandAnalysis.h"
#define GETNBIT(a, n) ((a & (1 << n)) ? 1 : 0) //macro to get the binary representation of a decimal value

//Constructor
HandAnalysis::HandAnalysis(const IplImage *rareImage) :
rareImage(rareImage)
{
    //setting high and low threshold of each channel
    this->lowFactors[3]; this->highFactors[3];
    //region is not initialed yet
    this->isRegionInitialed = false;
    //get converted format image
    this->formatCvtedImage = cvCreateImage(cvSize(this->rareImage->width, this->rareImage->height), this->rareImage->depth, this->rareImage->nChannels);
    cvCvtColor(this->rareImage, this->formatCvtedImage, CV_BGR2YCrCb); //transform image format to YCrCb from BGR
}

//Destructor
HandAnalysis::~HandAnalysis()
{
}

void HandAnalysis::recognizingHand()
{
    this->binaryImage = this->regioninitial(this->formatCvtedImage, lowFactors[0], highFactors[0], lowFactors[1], highFactors[1], lowFactors[2], highFactors[2]);
    this->findAPointOnHand();
}

IplImage *HandAnalysis::getBinaryImage()
{
    return this->binaryImage;
}

IplImage *HandAnalysis::getRareImage()
{
    return (IplImage *)this->rareImage;
}

//set thresholding factors for binary image
void HandAnalysis::setThreshold(unsigned char rlowerthreshold, unsigned char rupperthreshold, unsigned char glowerthreshold, unsigned char gupperthreshold, unsigned char blowerthreshold, unsigned char bupperthreshold)
{
    this->lowFactors[0] = rlowerthreshold, this->highFactors[0] = rupperthreshold;
    this->lowFactors[1] = glowerthreshold, this->highFactors[1] = gupperthreshold;
    this->lowFactors[2] = blowerthreshold, this->highFactors[2] = bupperthreshold;
}

IplImage *HandAnalysis::regionGrowing(IplImage *rareImage, IplImage *mask)
{
}

/* on mask, 0 is a non-masked pixel, others are masked pixels.
   if mask is not initialized, return NULL; otherwise return a point on the hand */
CvPoint HandAnalysis::findAPointOnHand()
{
    if(this->isRegionInitialed == true) //binary image is initialed
    {
        CvPoint result;
        int pixelCount, totalHeight, totalWidth;
        int row, column;
        int i, j;
        //Init params
        pixelCount = 0;
        totalHeight = 0;
        totalWidth = 0;
        result = cvPoint(0,0);
        //calibration a hand
        for(row = 0; row < this->binaryImage->height; row++)
            for(column = 0; column < this->binaryImage->width; column++)
            {
                unsigned char pixel = this->binaryImage->imageData[row*this->binaryImage->widthStep + column];
                if(pixel != 0)
                {
                    totalHeight += row;
                    totalWidth += column;
                    pixelCount++;
                }
            }
        result.x = totalWidth/pixelCount;
        result.
y = totalHeight/pixelCount;
        //mark the point on the binary image
        for(i = result.y-25; i < result.y+25; i++)
            for(j = result.x-25; j < result.x+25; j++)
                this->binaryImage->imageData[i*this->binaryImage->widthStep + j] = 125;
        return result;
    }
    else
        return cvPoint(0,0);
}

/* in.     IplImage rareImage
   return. mask of the valid region of rareImage. if the pixel is in the valid region,
           mask's element is 1. Otherwise the element is 0. */
IplImage *HandAnalysis::regioninitial(IplImage *rareImage, unsigned char rlowerthreshold, unsigned char rupperthreshold, unsigned char glowerthreshold, unsigned char gupperthreshold, unsigned char blowerthreshold, unsigned char bupperthreshold)
{
    int pixelindexx = 0;
    int pixelindexy = 0;
    int i = 0;
    CvSize rareImagesize;
    IplImage *mask;
    rareImagesize = cvSize(rareImage->width, rareImage->height);
    mask = cvCreateImage(rareImagesize, IPL_DEPTH_8U, 1); //IPL_DEPTH_8U 127
    for(pixelindexx = 0; pixelindexx < rareImage->height; pixelindexx++)
        for(pixelindexy = 0; pixelindexy < rareImage->width; pixelindexy++)
        {
            unsigned char r;
            unsigned char g;
            unsigned char b;
            r = (unsigned char)rareImage->imageData[pixelindexx*rareImage->widthStep + pixelindexy*rareImage->nChannels];
            g = (unsigned char)rareImage->imageData[pixelindexx*rareImage->widthStep + pixelindexy*rareImage->nChannels + 1];
            b = (unsigned char)rareImage->imageData[pixelindexx*rareImage->widthStep + pixelindexy*rareImage->nChannels + 2];
            if( (rlowerthreshold <= r && r <= rupperthreshold) && (glowerthreshold <= g && g <= gupperthreshold) && (blowerthreshold <= b && b <= bupperthreshold) )
                mask->imageData[pixelindexx*mask->widthStep + pixelindexy] = 255;
            else
                mask->imageData[pixelindexx*mask->widthStep + pixelindexy] = 0;
        }
    //mask->origin = 1;
    this->isRegionInitialed = true;
    return mask;
}

ThresholdingWindows.h
------------------------------------------------------------
#ifndef THRESHOLDINGWINDOWSH
#define THRESHOLDINGWINDOWSH
#ifndef OPENCVHPPISINCLUDED
#define OPENCVHPPISINCLUDED
#include <opencv.hpp>
#endif //OPENCVHPPISINCLUDED
#include "HandAnalysis.
h"

class ThresholdingWindows
{
public:
    ThresholdingWindows();
    virtual ~ThresholdingWindows();
public:
    void showThresholdingImages(HandAnalysis hand);
private:
    void setThresholdingTrackbar(char *windowName, int *plowerthreshold, int *pupperthreshold);
    void setThresholdingWindows(int *firstlower, int *firstupper, int *secondlower, int *secondupper, int *thirdlower, int *thirdupper);
    void destroyThresholdingWindows();
private:
    int *firstlower, *firstupper;
    int *secondlower, *secondupper;
    int *thirdlower, *thirdupper;
};
#endif //THRESHOLDINGWINDOWSH

ThresholdingWindows.cpp
----------------------------------------------------------
#include "ThresholdingWindows.h"

ThresholdingWindows::ThresholdingWindows()
{
    firstlower = (int *)malloc(sizeof(int));
    firstupper = (int *)malloc(sizeof(int));
    secondlower = (int *)malloc(sizeof(int));
    secondupper = (int *)malloc(sizeof(int));
    thirdlower = (int *)malloc(sizeof(int));
    thirdupper = (int *)malloc(sizeof(int));
    *firstlower = 0;
    *firstupper = 0;
    *secondlower = 0;
    *secondupper = 0;
    *thirdlower = 0;
    *thirdupper = 0;
    this->setThresholdingWindows(firstlower, firstupper, secondlower, secondupper, thirdlower, thirdupper);
}

ThresholdingWindows::~ThresholdingWindows()
{
    free(firstlower);
    free(firstupper);
    free(secondlower);
    free(secondupper);
    free(thirdlower);
    free(thirdupper);
    this->destroyThresholdingWindows();
}

void ThresholdingWindows::setThresholdingTrackbar(char *windowName, int *plowerthreshold, int *pupperthreshold)
{
    cvCreateTrackbar("Low", windowName, plowerthreshold, 255, NULL);
    cvCreateTrackbar("High", windowName, pupperthreshold, 255, NULL);
}

void ThresholdingWindows::setThresholdingWindows(int *firstlower, int *firstupper, int *secondlower, int *secondupper, int *thirdlower, int *thirdupper)
{
    cvNamedWindow("First", 1);
    cvNamedWindow("Second", 1);
    cvNamedWindow("Third", 1);
    setThresholdingTrackbar("First", firstlower, firstupper);
    setThresholdingTrackbar("Second", secondlower, secondupper);
    setThresholdingTrackbar("Third", thirdlower, thirdupper);
}

void ThresholdingWindows::showThresholdingImages(HandAnalysis hand)
{
    //Declaration
    //IplImage *eachChannelImage[3];
    IplImage *rareimage = hand.
getRareImage();
    IplImage *firstimage = cvCreateImage(cvSize(rareimage->width, rareimage->height), IPL_DEPTH_8U, 1); //32F
    IplImage *secondimage = cvCreateImage(cvSize(rareimage->width, rareimage->height), IPL_DEPTH_8U, 1);
    IplImage *thirdimage = cvCreateImage(cvSize(rareimage->width, rareimage->height), IPL_DEPTH_8U, 1);
    int pixelindexx = 0;
    int pixelindexy = 0;
    firstimage->origin = 1;
    secondimage->origin = 1;
    thirdimage->origin = 1;
    //output = fopen("output.txt", "wb");
    //Implementation
    for(pixelindexx = 0; pixelindexx < rareimage->height; pixelindexx++)
        for(pixelindexy = 0; pixelindexy < rareimage->width; pixelindexy++)
        {
            unsigned char pixelfirst, pixelsecond, pixelthird;
            pixelfirst = (unsigned char)rareimage->imageData[pixelindexx*rareimage->widthStep + pixelindexy*rareimage->nChannels];
            pixelsecond = (unsigned char)rareimage->imageData[pixelindexx*rareimage->widthStep + pixelindexy*rareimage->nChannels + 1];
            pixelthird = (unsigned char)rareimage->imageData[pixelindexx*rareimage->widthStep + pixelindexy*rareimage->nChannels + 2];
            /*
            firstimage->imageData[pixelindexx*firstimage->widthStep + pixelindexy] = pixelfirst;
            secondimage->imageData[pixelindexx*firstimage->widthStep + pixelindexy] = pixelsecond;
            thirdimage->imageData[pixelindexx*firstimage->widthStep + pixelindexy] = pixelthird;
            */
            //fprintf(output, "(%3d,%3d,%3d)",
//        *firstlower, pixelfirst, *firstupper);
            if(((unsigned char)*firstlower <= pixelfirst) && (pixelfirst <= (unsigned char)*firstupper))
                firstimage->imageData[pixelindexx*firstimage->widthStep + pixelindexy] = 255;
            else
                firstimage->imageData[pixelindexx*firstimage->widthStep + pixelindexy] = 0;
            if(((unsigned char)*secondlower <= pixelsecond) && (pixelsecond <= (unsigned char)*secondupper))
                secondimage->imageData[pixelindexx*secondimage->widthStep + pixelindexy] = 255;
            else
                secondimage->imageData[pixelindexx*secondimage->widthStep + pixelindexy] = 0;
            if(((unsigned char)*thirdlower <= pixelthird) && (pixelthird <= (unsigned char)*thirdupper))
                thirdimage->imageData[pixelindexx*thirdimage->widthStep + pixelindexy] = 255;
            else
                thirdimage->imageData[pixelindexx*thirdimage->widthStep + pixelindexy] = 0;
        }
    //fprintf(output, "\n");
    //fclose(output);
    /*
    cvThreshold(firstimage, firstimage, *firstlower, 255, CV_THRESH_TOZERO);
    cvThreshold(firstimage, firstimage, *firstupper, 255, CV_THRESH_TOZERO_INV);
    cvThreshold(secondimage, secondimage, *secondlower, 255, CV_THRESH_TOZERO);
    cvThreshold(secondimage, secondimage, *secondupper, 255, CV_THRESH_TOZERO_INV);
    cvThreshold(thirdimage, thirdimage, *thirdlower, 255, CV_THRESH_TOZERO);
    cvThreshold(thirdimage, thirdimage, *thirdupper, 255, CV_THRESH_TOZERO_INV);
    */
    /*
    fprintf1DepthImage2Int("first", firstimage);
    fprintf1DepthImage2Int("second", secondimage);
    fprintf1DepthImage2Int("third", thirdimage);
    */
    hand.setThreshold((unsigned char)(*firstlower), (unsigned char)(*firstupper), (unsigned char)(*secondlower), (unsigned char)(*secondupper), (unsigned char)(*thirdlower), (unsigned char)(*thirdupper));
    cvShowImage("First", firstimage);
    cvShowImage("Second", secondimage);
    cvShowImage("Third", thirdimage);
    cvReleaseImage(&firstimage);
    cvReleaseImage(&secondimage);
    cvReleaseImage(&thirdimage);
}

DebugingTool.h
----------------------------------------------------------------
#ifndef DEBUGINGTOOLH
#define DEBUGINGTOOLH
#ifndef OPENCVHPPISINCLUDED
#define OPENCVHPPISINCLUDED
#include <opencv.hpp>
#endif //OPENCVHPPISINCLUDED
#include <stdio.
h>

class DebugingTool
{
    DebugingTool();
public:
    static void setMode(bool mode);
    static void fprintImage2BitSequence(char *name, IplImage *inputImage);
    static void fprintf3DepthImage2Int(char *name, IplImage *inputImage);
    static void fprintf1DepthImage2Int(char *name, IplImage *inputImage);
    static void fprintfImage2Int2(IplImage *inputImage);
    static bool isDebugingMode; // false
};

void DebugingTool::setMode(bool mode)
{
    isDebugingMode = mode;
}

void DebugingTool::fprintImage2BitSequence(char *name, IplImage *inputImage)
{
    if(isDebugingMode)
    {
        FILE *image2bit;
        int pixelCount, bitCount, factorIndex;
        image2bit = fopen(strcat(name, "image2bit.txt"), "wb");
        for(pixelCount = 0; pixelCount < inputImage->imageSize/inputImage->nChannels; pixelCount++)
        {
            char factor[3];
            factor[0] = inputImage->imageData[pixelCount*3];
            factor[1] = inputImage->imageData[pixelCount*3+1];
            factor[2] = inputImage->imageData[pixelCount*3+2];
            fprintf(image2bit, "(");
            for(factorIndex = 0; factorIndex < 3; factorIndex++)
            {
                for(bitCount = 7; bitCount > -1; bitCount--)
                    fprintf(image2bit, "%d", GETNBIT(factor[factorIndex], bitCount));
                fprintf(image2bit, ",");
            }
            fprintf(image2bit, ") ");
            if(pixelCount % inputImage->width == 0)
                fprintf(image2bit, "\n");
        }
        fclose(image2bit);
    }
}

void DebugingTool::fprintf3DepthImage2Int(char *name, IplImage *inputImage)
{
    if(isDebugingMode)
    {
        FILE *image2Int;
        int pixelCount, bitCount, factorIndex;
        char filename[100];
        strcat(filename, name);
        strcat(filename, "image2Int.txt");
        image2Int = fopen(filename, "wb");
        //image2Int = fopen("image2Int.txt", "wb");
        for(pixelCount = 0; pixelCount < inputImage->imageSize/inputImage->nChannels; pixelCount++)
        {
            int factor[3];
            factor[0] = (unsigned char)inputImage->imageData[pixelCount*3];
            factor[1] = (unsigned char)inputImage->imageData[pixelCount*3+1];
            factor[2] = (unsigned char)inputImage->imageData[pixelCount*3+2];
            fprintf(image2Int, "(");
            for(factorIndex = 0; factorIndex < 3; factorIndex++)
                fprintf(image2Int, "%d,", factor[factorIndex]);
            fprintf(image2Int,
") ");
            if(pixelCount % inputImage->width == 0)
                fprintf(image2Int, "\n");
        }
        fclose(image2Int);
    }
}

void DebugingTool::fprintf1DepthImage2Int(char *name, IplImage *inputImage)
{
    if(isDebugingMode)
    {
        FILE *image2Int;
        int pixelCount, bitCount, factorIndex;
        char filename[100];
        strcat(filename, name);
        strcat(filename, "image2Int.txt");
        image2Int = fopen(filename, "wb");
        for(pixelCount = 0; pixelCount < inputImage->imageSize; pixelCount++)
        {
            int factor;
            factor = (unsigned char)(inputImage->imageData[pixelCount]);
            fprintf(image2Int, "%d ", factor);
            if(pixelCount % inputImage->width == 0)
                fprintf(image2Int, "\n");
        }
        fclose(image2Int);
    }
}

void DebugingTool::fprintfImage2Int2(IplImage *inputImage)
{
    if(isDebugingMode)
    {
        FILE *image2Int;
        int pixelCount, bitCount, factorIndex;
        image2Int = fopen("image2Int.txt", "wb");
        for(pixelCount = 0; pixelCount < inputImage->imageSize/inputImage->nChannels; pixelCount++)
        {
            char factor[3];
            factor[0] = inputImage->imageData[pixelCount*3];
            factor[1] = inputImage->imageData[pixelCount*3+1];
            factor[2] = inputImage->imageData[pixelCount*3+2];
            fprintf(image2Int, "(");
            for(factorIndex = 0; factorIndex < 3; factorIndex++)
                fprintf(image2Int, "%d,", factor[factorIndex]);
            fprintf(image2Int, ") ");
            if(pixelCount % inputImage->width == 0)
                fprintf(image2Int, "\n");
        }
        fclose(image2Int);
    }
}
--------------------------------------------------------------------------------
PS. Sorry for my bad English.

I tried what you told me, but it didn't solve the problem. However, I solved it with Clean Solution and Rebuild. Thank you for your answer. Good luck.

DB:2.50:Jpeg Save Corrupted ja
Hello, I have a very simple program in Linux.

cout << "HELLO";
string image = "/home/ferru001/grayimages/00000052.jpg";
CIppImage grayimage;
CStdFileInput in;
CStdFileOutput out;
in.Open(image.c_str());
PARAMS_JPEG mparamjpeg;
JERRCODE jerr;
mparamjpeg.nthreads = 1;
mparamjpeg.useqdct = false;
mparamjpeg.quality = 100;
jerr = ReadImageJPEG(in, mparamjpeg, grayimage);
in.Close();
if( JPEG_OK == jerr )
    cout << "JPEG READ OK" << endl;
cout << "WRITING JPEG" << endl;
out.Open( "/home/ferru001/graytest.
jpg" );
jerr = SaveImageJPEG( grayimage, mparamjpeg, out );
out.Close();
if( JPEG_OK == jerr )
    cout << "SAVE SUCCESS" << endl;
else
    cout << "SAVE FAILED";

ReadImageJPEG and SaveImageJPEG are from UIC. The outputted image comes out corrupt: only a segment of it is written, and when I open it in GIMP the following comes up:
Corrupt JPEG data: 15046 extraneous bytes after marker 0xd9. EXIF data will be ignored. Corrupt JPEG data: found marker 0xd9 instead of RST0. Corrupt JPEG data: premature end of data segment.
Any ideas why this is happening? Am I forgetting to do something?

You are welcome :) I think if you left some parameters uninitialized they may influence the codec behaviour. For example, if the JPEG comment control parameters are not zero, then the encoder will try to embed a comment string into the encoded JPEG (and if these parameters are not initialized properly, the comment will be taken from a random memory address, or the comment string will have a random length).

RELEVANCY SCORE 2.48

DB:2.48:Segmentation Using Labels cd
I wanted to write a function that segments a grayscale image, but it seems that the number of resulting labels is always 1 for my function. I think the problem comes from the fact that I have only a few '0' in my grayscale source image. Does it mean that I need to preprocess my picture before segmenting with IPP? Where have I made an error? Here is my code:

IplImage *Segmentation(IplImage *pcvGrayImg)
{
    CvSize cvSize;
    IppiSize RoiSize;
    int bufferSize, numLabels;
    int minLabel = 1;
    int maxLabel = 254;
    cvSize.height = pcvGrayImg->height;
    cvSize.width = pcvGrayImg->width;
    IplImage *pcvSegImg = cvCreateImage( cvSize, 8, 1 );
    RoiSize.width = pcvGrayImg->width;
    RoiSize.
height = pcvGrayImg->height;
    ippiLabelMarkersGetBufferSize_8u_C1R(RoiSize, &bufferSize);
    Ipp8u *buffer = ippsMalloc_8u(bufferSize);
    ippiLabelMarkers_8u_C1IR((Ipp8u *)pcvGrayImg->imageData, pcvGrayImg->widthStep, RoiSize, minLabel, maxLabel, ippiNormInf, &numLabels, buffer);
    printf("Numlabels: %d\n", numLabels);
}

Thanks for your help.

Please take a look at this thread, a similar problem: software.intel/en-us/forums/showthread.phpt65142

RELEVANCY SCORE 2.47

DB:2.47:Displaying Video With Opencv kp
Hi, I am trying to play an AVI video using a short program.

// AVI Stream.cpp
// This program displays an AVI video and performs processing on it
// and thus displays the processed video.
#include "stdafx.h"
#include "highgui.h"
#include "cv.h"
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    IplImage *frame;
    int key;
    CvCapture *capture = cvCaptureFromAVI("music.avi");
    if(!capture) return 1;
    int framesPerSecond = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
    while(key != 'q')
    {
        frame = cvQueryFrame(capture);
        //frame = cvGrabFrame(capture);
        //frame1 = cvRetrieveFrame(capture);
        //cvShowImage("Example", frame1);
        //while(1)
        //{
        //    frame = cvQueryFrame(capture);
        //    if(!frame) break;
        //    cvShowImage("Video Example", frame);
        //    char c = cvWaitKey(1000);
        //    if(c == 27) break;
        //}
    }
    cvNamedWindow("Video", CV_WINDOW_AUTOSIZE);
    CvCapture *capture = cvCreateFileCapture("music.avi");
    IplImage *frame;
    while(1)
    {
        frame = cvQueryFrame(capture);
        if(!frame) break;
        cvShowImage("Video", frame);
        char c = cvWaitKey(10000000);
        if(c == 27) break;
    }
    //frame = cvQueryFrame(capture);
    //cout << "This is what is stored in capture: " << capture;
    //cvShowImage("Video", capture);
    cvReleaseCapture(&capture);
    cvDestroyWindow("Video");
    //cout << "This is what is stored in capture: " << capture;
}

Unfortunately, a window displays with the name of the window, but no video plays. HELP ME PLEASE. I don't know if there is a codec I need or what. The variable that holds my capture structure is 0.
As far as I know, OpenCV uses ffmpeg for video decoding. You may need to ensure that the compression format used in your AVI file is supported by ffmpeg.

DB:2.47:Opencv And Ipp 9s
I'm using IPP and OpenCV in the same program and found a very strange behavior while using cvNamedWindow and ippiMalloc. Here is a very simple example (just replace "Picture" by another file name) that always crashes after the fourth iteration.

#include "cv.h"
#include "highgui.h"
#include "ipp.h"

/* DISPLAY AN OPENCV IMAGE */
IplImage *pcvImg = NULL;
pcvImg = cvLoadImage("Picture", CV_LOAD_IMAGE_COLOR);
cvNamedWindow("OpenCV picture", CV_WINDOW_AUTOSIZE);
cvShowImage("Open picture", pcvImg);
cvWaitKey(0);
cvReleaseImage(&pcvImg);

/* COMPUTE WITH IPP */
int pitchproba, m, n;
Ipp32f *Proba = NULL;
Proba = ippiMalloc_32f_C1(4, 5, &pitchproba);
for (m = 0; m < 5; m++)
{
    for (n = 0; n < 4; n++)
    {
        Proba[n + m*pitchproba] = (Ipp32f)(0.0);
        printf("%f ", Proba[n + m*pitchproba]);
    }
    printf("\n");
}
ippiFree(Proba);

Do you have any idea how to manage that?

Hm, what kind of error are you talking about? Do you mean the printf operator does not show the expected values, or is there an access violation, or what? BTW, instead of a raw loop for initialization of image data (where it is easy to make address arithmetic mistakes) you may want to try the ippiSet_32f_C1R function, so your loop might be rewritten like the piece of code below:

Proba = ippiMalloc_32f_C1(4, 5, &pitchproba);
IppiSize roi;
ippiSet_32f_C1R(0.0, Proba, pitchproba, roi);
ippiFree(Proba);

RELEVANCY SCORE 2.46

DB:2.46:Cbitmap - Simple Example m1
Hi, I am trying to load some images in a Picture Area. I only have the pointers to the images, and these pointers are of the type unsigned long (I can also convert them to IplImage to use with OpenCV).
This unsigned long holds the images grabbed from a camera, and I would like to display these images in this Picture Area (it is real time: I get the images from my camera and display them). I have tried to use the (HBITMAP)LoadImage function, but the last parameter is always LOADIMAGEFROMFILE, and that is not the case in my application. So, can you help me convert my unsigned long to a Bitmap and then load it exactly where I want in my application? Is this the best solution? Many thanks, Gordo

Ricardo, sorry for not mentioning it again (it was in one of my earlier posts): I was talking about the CImage class described in msdn.microsoft/library/en-us/vclib/html/vcrefcimage.asp. This is a wrapper class around GDI functions for handling and drawing images and should give you an easy start. Regards, Bernd

DB:2.43:Detection Of Component ks
I have one problem: fatal error C1083: Cannot open include file: 'BlobResult.h': No such file or directory. Help me: how do I add BlobResult.h?

// DetectBlobs.cpp. Defines the entry point for the console application.
#include "stdafx.h"
#include "BlobResult.h"
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"

int _tmain(int argc, _TCHAR *argv[])
{
    // Initialise
    //std::string filepath = "spots.bmp";
    std::string filepath = "spots2.bmp";
    CBlobResult blobs;
    int numblobs = 0;
    // Load grayscale version of coloured input image
    IplImage *original = cvLoadImage( filepath.c_str() );
    IplImage *grayscale = cvLoadImage( filepath.c_str(), CV_LOAD_IMAGE_GRAYSCALE );
    // Check bitmap image exists
    assert( grayscale );
    // Create IplImage struct for a black and
    // white (binary) image
    IplImage *imgbw = cvCreateImage( cvGetSize( grayscale ), IPL_DEPTH_8U, 1 );
    // Use thresholding to convert grayscale image
    // into binary
    cvThreshold( grayscale,  // source image
                 imgbw,      // destination image
                 40,         // threshold val.
                 255,        // max.
val
                 CV_THRESH_BINARY ); // binary
    // Create IplImage struct for inverted black
    // and white image
    IplImage *imgbwinv = cvCloneImage( imgbw );
    IplImage *imgbwcpy = cvCloneImage( imgbw );
    // Find connected components using cvBlobsLib
    blobs = CBlobResult( imgbw, 0, 255 );
    // Exclude all blobs smaller than the given value
    // The bigger the last parameter, the bigger the
    // blobs need to be for inclusion
    blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_GREATER, 3 );
    // Find connected components using OpenCV
    CvSeq *seq;
    CvMemStorage *storage = cvCreateMemStorage( 0 );
    cvClearMemStorage( storage );
    // cvFindContours counts 1 extra object for
    // white backgrounds and black spots, hence
    // subtract 1
    numblobs = cvFindContours( imgbw, storage, &seq, sizeof( CvContour ), CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cvPoint( 0, 0 ) ) - 1;
    // Display the input / output windows and images
    cvNamedWindow( "original" );
    cvShowImage( "original", original );
    cvNamedWindow( "grayscale" );
    cvShowImage( "grayscale", grayscale );
    cvNamedWindow( "blackandwhite" );
    cvShowImage( "blackandwhite", imgbwcpy );
    // Wait for user key press and then tidy up
    cvWaitKey(0);
    cvReleaseImage( &original );
    cvReleaseImage( &grayscale );
    cvReleaseImage( &imgbw );
    cvReleaseImage( &imgbwinv );
    cvReleaseImage( &imgbwcpy );
    cvDestroyWindow( "greyscale" );
    cvDestroyWindow( "blackandwhite" );
    cvDestroyWindow( "inverted" );
}

Hi, I'm baeckgoo. I tried converting the RGB image in a PXCImage into data in an IplImage. But I don't know how to convert the depth image of a PXCImage into IplImage data. I know the type of the depth data in a PXCImage is float. How can I convert this into an IplImage? Can I get a hint or an explanation? My code is the following. (When I wanted to get depth data, I tried using unsigned char instead of the float type, but it failed.)

#include "cv.h"
#include "highgui.h"
#include <stdio.h>
//#include "stdafx.h"
#include "util_render.h"
#include "util_pipeline.h"

int main()
{
    UtilPipeline pipeline;
    pipeline.EnableImage(PXCImage::COLOR_FORMAT_RGB24, 640, 480);
    pipeline.
EnableImage(PXCImage::COLOR_FORMAT_DEPTH, 320, 240); //depth resolution 320x240 maximum
    pipeline.Init();
    UtilRender colorrender(L"Color Stream");
    UtilRender depthrender(L"Depth Stream");
    ///////////// OPENCV
    IplImage *image = 0;
    CvSize gabsize;
    gabsize.height = 480;
    gabsize.width = 640;
    image = cvCreateImage(gabsize, 8, 3);
    IplImage *depth = 0;
    CvSize gabsizedepth;
    gabsizedepth.height = 240;
    gabsizedepth.width = 320;
    depth = cvCreateImage(gabsize, 8, 1);
    //PXCImage *colorimage = 0;
    PXCImage::ImageData data;
    PXCImage::ImageData datadepth;
    unsigned char *rgbdata; //new unsigned char
    float *depthdata;
    //rgbdata = (unsigned char *)image->imageData;
    PXCImage::ImageInfo rgbinfo;
    PXCImage::ImageInfo depthinfo;
    cvNamedWindow("depthcv2", 0);
    cvResizeWindow("depthcv2", 320, 240);
    ///////
    for (;;)
    {
        if (!pipeline.AcquireFrame(true)) break;
        PXCImage *colorimage = pipeline.QueryImage(PXCImage::IMAGE_TYPE_COLOR);
        PXCImage *depthimage = pipeline.QueryImage(PXCImage::IMAGE_TYPE_DEPTH);
        colorimage->AcquireAccess(PXCImage::ACCESS_READ_WRITE, PXCImage::COLOR_FORMAT_RGB24, &data); // release
        depthimage->AcquireAccess(PXCImage::ACCESS_READ, &datadepth);
        //depthimage->AcquireAccess(PXCImage::ACCESS_READ_WRITE, PXCImage::...
        //(unsigned char *)image->imageData = data.planes[0];
        rgbdata = data.planes[0];
        depthdata = (float *)datadepth.planes[0];
        printf("0: %f\n", depthdata[0]); //921600 = 640x480x3; indices 0..921599
        //printf("1: %f\n", depthdata[1]);
        //printf("2: %f\n", depthdata[2]);
        //printf("3: %f\n", depthdata[3]);
        //printf("4: %f\n", depthdata[4]);
        pxcStatus stat1 = depthimage->QueryInfo(depthinfo);
        //pxcStatus stat2 = colorimage->QueryInfo(rgbinfo);
        int w1 = depthinfo.width;
        int h1 = depthinfo.height;
        //printf("w%d, h%d\n", w1, h1);
        //int w2 = rgbinfo.width;
        //int h2 = rgbinfo.height;
        for(int y = 0; y < 240; y++)
            for(int x = 0; x < 320; x++)
                depth->imageData[y*320+x] = depthdata[y*320+x];
        colorimage->ReleaseAccess(&data);
        depthimage->ReleaseAccess(&datadepth);
        cvShowImage("rgbcv", image);
        cvShowImage("depthcv2", depth);
        /////////////opencv
        if( cvWaitKey(10) >= 0 ) break;
        //if (!colorrender.
//RenderFrame(colorimage)) break;
        if (!depthrender.RenderFrame(depthimage)) break;
        pipeline.ReleaseFrame();
    }
}

Can you share code to convert an RGB bitmap image into a PXCImage?

RELEVANCY SCORE 2.41

DB:2.41:My Code No Error. But Not Run. Help Me To Run This Program 7j
// loadandsaveimage.cpp. Defines the entry point for the console application.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "math.h"

// Load the source image. HighGUI use.
IplImage *image02 = 0, *image03 = 0, *image04 = 0;
double area[30];
int countnumber = 0;
double value;
void processimage(int h);

int main( int argc, char *argv[] )
{
    image03 = cvLoadImage("01.bmp");
    // Create the destination images
    image02 = cvCloneImage( image03 );
    image04 = cvCloneImage( image03 );
    // Create toolbars. HighGUI use.
    processimage(0);
    // Wait for a key stroke; the same function arranges events processing
    cvWaitKey(0);
    cvReleaseImage(&image02);
    cvReleaseImage(&image03);
}

// Define trackbar callback function. This function finds contours,
// draws them and approximates them by ellipses.
void processimage(int h)
{
    double angle;
    //int a;
    CvMemStorage *stor;
    CvSeq *cont;
    CvBox2D32f *box;
    CvPoint *PointArray;
    CvPoint2D32f *PointArray2D32f;
    int r, c;
    // Create dynamic structure and sequence.
    stor = cvCreateMemStorage(0);
    cont = cvCreateSeq(CV_SEQ_ELTYPE_POINT, sizeof(CvSeq), sizeof(CvPoint), stor);
    // Find all contours.
    cvFindContours( image02, stor, &cont, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
    // Clear images. IPL use.
    cvZero(image02);
    // This cycle draws all contours and approximates them by ellipses.
    for( ; cont; cont = cont->h_next)
    {
        int i; // Indicator of cycle.
        int count = cont->total; // This is the number of points in the contour
        CvPoint center;
        CvSize size;
        // The number of points must be more than or equal to 6 (for cvFitEllipse32f).
        if( count < 6 ) continue;
        // Alloc memory for contour point set.
        PointArray = (CvPoint *)malloc( count*sizeof(CvPoint) );
        PointArray2D32f = (CvPoint2D32f *)malloc( count*sizeof(CvPoint2D32f) );
        // Alloc memory for ellipse data.
        box = (CvBox2D32f *)malloc(sizeof(CvBox2D32f));
        // Get contour point set.
        cvCvtSeqToArray(cont, PointArray, CV_WHOLE_SEQ); //Copies sequence to one continuous block of memory
        // Convert CvPoint set to CvPoint2D32f set.
        for(i = 0; i < count; i++)
        {
            PointArray2D32f[i].x = (float)PointArray[i].x;
            PointArray2D32f[i].y = (float)PointArray[i].y;
        }
        // Fit ellipse to current contour.
        cvFitEllipse(PointArray2D32f, count, box);
        // Draw current contour.
        cvDrawContours(image04, cont, CV_RGB(255,255,255), CV_RGB(255,255,255), 0, 1, 8, cvPoint(0,0));
        cvNamedWindow("contour", 1);
        cvShowImage("contour", image04);
        // Convert ellipse data from float to integer representation.
        center.x = cvRound(box->center.x);
        center.y = cvRound(box->center.y);
        size.width = cvRound(box->size.width*0.5);
        size.height = cvRound(box->size.height*0.5);
        box->angle = -box->angle;
        // Draw ellipse.
        // if (area[countnumber] > 500)
        //     cvEllipse(image04, center, size, box->angle, 0, 360, CV_RGB(255,255,255), 1, CV_AA, 0);
        angle = box->angle*2*3.1415926/360;
        for( r = 0; r < image04->height; r++ )
        {
            for( c = 0; c < image04->width; c++ )
            {
                value = ((c - box->center.x)*cos(angle) + (r - box->center.y)*sin(-angle)) * ((c - box->center.x)*cos(angle) + (r - box->center.y)*sin(-angle)) / (0.25*box->size.width*box->size.width);
                value = value + (-(c - box->center.x)*sin(-angle) + (r - box->center.y)*cos(angle)) * (-(c - box->center.x)*sin(-angle) + (r - box->center.y)*cos(angle)) / (0.25*box->size.height*box->size.height);
                if ( value < 1 )
                {
                    // ...
                    // cvNamedWindow("aa", 1);
                    // cvShowImage("aa", image04);
                    // ...
                }
            }
        }
        countnumber = countnumber + 1;
        // Free memory.
        free(PointArray);
        free(PointArray2D32f);
        free(box);
    }
    // Show image. HighGUI use.
    cvShowImage( "Result", image04 );
}

Hi bitochekonam, I have been watching this issue for a while now. I found it is an issue related to an OpenCV function. I suggest you go to this page for more useful information: opencv.willowgarage/wiki/Welcome/Support. This thread will be moved to Off-Topic Posts. Thanks for your understanding and active participation in the MSDN Forum.
Best regards, Helen Zhao [MSFT]. MSDN Community Support. Feedback to us.

RELEVANCY SCORE 2.41

DB:2.41:Error With Ippiconvert8u1uC1r 7s
I have an application that reads JPEGs, converts them to grayscale, and then binarizes them using the ippiConvert_8u1u_C1R function. When I save the binarized image and view it, it looks squished horizontally (I think I'm not setting dstStep correctly or something). Below is the relevant piece of code:

//mimage.Precision() is 8
monochromeimage.Alloc( grayscaleimage.Size(), 1, mimage.Precision() );
//width: 5104, Height: 2204
IppiSize roiSize;
IppStatus mystatus;
mystatus = ippiConvert_8u1u_C1R( (const Ipp8u *)grayscaleimage.DataPtr(), grayscaleimage.Step(), (Ipp8u *)monochromeimage, monochromeimage.Width(), 0, roiSize, (Ipp8u)148);

I looked at the available documentation, but all it had was a description of dstStep for bitonal images, not a sample of how to set it. grayscaleimage and monochromeimage are both of type CIppImage. Prior to binarizing, I was saving the grayscale image to disk and it looked fine. Can anyone help me with this issue? Thanks.

Hello Ying, it turns out I had it right but was not using the correct method of encoding the image. I was encoding using LOSSLESS JPEG compression and assumed that the JPEG viewer would treat the bit-packed image correctly (unfolding bits into individual pixels). I have since modified this to encode the buffer as a CCITT T.6 encoded TIFF and it works (I used libtiff). I have a question though: something weird happens. I use pretty much the exact code you put there; I convert the image using ippiConvert_8u1u_C1R and then encode it using libtiff. The image shows fine and it's bitonal. HOWEVER, the image colors are ALWAYS inverted: blacks are where whites are supposed to be and whites are where blacks are supposed to be (no matter whether I put MINISBLACK or MINISWHITE when encoding; my own binarizing method works fine when I use it to create the bitonal buffer and encode it).
Does the convert operation use 0 or 1 for white? Also, I'm assuming that the Convert function treats pixels in order from Most Significant to Least Significant bit.

Alessandro, as to ippiConvert_8u1u itself, it uses 1 for white and 0 for black, as in the image above (255 maps to 1). Regards, Ying

RELEVANCY SCORE 2.40

DB:2.40:Copy Constructor And Operator Are Missing In Cippimage aj
Hi, the copy constructor and assignment operator should be defined in CIppImage, as this class contains a pointer to mimageData. If the implicit copy constructor or assignment operator is called, the destructor tries to release mimageData twice, and fails. Thank you.

Once again you are right! We didn't need this functionality for UIC, but for any other usage the assignment operator should be in place.

RELEVANCY SCORE 2.38

Welcome to the forums! You would need Adobe Acrobat software to convert PDF files to Excel. If you already have it, then please check the link help.adobe/enUS/acrobat/X/pro/using/WS58a04a822e3e50102bd615109794195ff-7eeb.w.html to convert. If you do not have the software, then please check the link adobe/products/acrobatpro.html to purchase. You can also consider the Export PDF subscription-based service; please check acrobat/exportpdf/en/home.htmltrackingidJIOJU for more info.
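The double free described in the CIppImage report above is the textbook rule-of-three situation: a class that owns a raw buffer through a pointer (here mimageData) must define its own copy constructor, assignment operator, and destructor, or the compiler-generated copies will share the pointer and the destructor will release it twice. A minimal sketch (using a hypothetical ImageBuffer class for illustration, not the actual UIC CIppImage code):

```cpp
#include <algorithm>
#include <cstddef>

// Owns a raw pixel buffer. Without the user-defined copy operations
// below, the compiler-generated ones would copy only the pointer,
// and two destructors would then delete the same allocation.
class ImageBuffer {
public:
    explicit ImageBuffer(std::size_t size)
        : mSize(size), mImageData(new unsigned char[size]()) {}

    // Deep copy: allocate a fresh buffer instead of sharing the pointer.
    ImageBuffer(const ImageBuffer& other)
        : mSize(other.mSize), mImageData(new unsigned char[other.mSize]) {
        std::copy(other.mImageData, other.mImageData + mSize, mImageData);
    }

    // Copy-and-swap assignment: the by-value parameter invokes the copy
    // constructor, and the swap leaves the old buffer in the temporary,
    // which frees it when the temporary is destroyed.
    ImageBuffer& operator=(ImageBuffer other) {
        std::swap(mSize, other.mSize);
        std::swap(mImageData, other.mImageData);
        return *this;
    }

    ~ImageBuffer() { delete[] mImageData; }

    unsigned char* data() { return mImageData; }
    std::size_t size() const { return mSize; }

private:
    std::size_t mSize;
    unsigned char* mImageData;
};
```

With these in place, copying an ImageBuffer gives each object its own allocation, so modifying one copy no longer touches the other and both destructors run safely.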
RELEVANCY SCORE 2.37 DB:2.37:ippiResizeSqrPixel_16u_C1R Returning -23 (ippStsResizeFactorErr) 7f

I'm trying to resize a 16-bit image but I always get the -23 error back, and xFactor and yFactor are definitely not 0 or close to 0 (in this case they are close to 32). I'm using the latest IPP 7.0 with IPPI_INTER_SUPER as the interpolation method. Essentially I created a copy of the CIppImage::Resize method so it can handle 16-bit data and replaced all ippiResizeSqrPixel_8u calls with the 16u variants (currently CIppImage::Resize can handle 8-bit images only). Could it be an issue with ippiResizeGetBufSize (it takes no parameter for the bit depth of the data it is going to be used on)?

According to the IPP documentation, super-sampling interpolation can only be used to reduce image resolution. So when you try to use it to enlarge an image, something is definitely wrong: either the interpolation mode or the resize factors. I agree the difference between the new and the deprecated implementations of the resize operation should be clearly stated in the documentation. Also thanks for pointing out the UIC image sample class; we will correct that.

RELEVANCY SCORE 2.37 DB:2.37:Import Video From Camera sm

You can access the pixels of an IplImage. So what you'll have to do is convert the IplImage to the correct color format (I think it's RGBA32) and send this data back to Unity to modify your Texture2D. The fastest way (imo) to do this is to create a Texture2D or pixel (Color) array in Unity, then send a pointer to that array to OpenCV and memcpy the pixels of your current frame to that pointer. The best thing is to do this in a separate thread so your webcam's framerate won't block your game's FPS.

No no, not through sockets. Actually I used some plugin/DLL machinery to do this; I don't remember the details by now, but I finally did it. If you need info, I will try to go through my old source code files.

RELEVANCY SCORE 2.37 DB:2.37:Undefined Reference To 'ippiFilterMedian_8u_C3R' d8

No idea why it is so. I followed the building procedure in /doc/GettingStarted.htm. Follow the steps below to build your application using the Intel IPP shared-object dispatching libraries. Note: you must link the ippcore functions statically (using libippcore.a). Call ippStaticInitBest() instead to turn off the shared-object dispatching feature (it always uses statically linked code). Calling ippStaticFree() releases the resources used by ippStaticInit() and then calls the ippStaticInitBest() function. Include ipp.h, libippcore.a, and the appropriate libipp*emerged.a file(s) in your project and build tree. Make sure the appropriate libipp*merged.a file(s) are in your build tree. Call ippStaticInit() before calling other Intel IPP functions. Then call the Intel IPP functions required in your application source code.

I added libippcore.a, and I succeeded in compiling my code, but failed to link:

    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>
    #include <ipp.h> // only for the direct call to ippiFilterMedian_8u_C3R

    int main(int, char**)
    {
        IppStatus status = ippStaticInit();
        const int M = 3;
        IppiSize msz = { M, M };
        IppiPoint ma = { M/2, M/2 };
        IplImage* img  = cvLoadImage("/usr/local/share/opencv/samples/c/lena.jpg", 1);
        IplImage* med1 = cvCreateImage(cvGetSize(img), 8, 3);
        IplImage* med2 = cvCloneImage(med1);
        int64 t0 = cvGetTickCount(), t1, t2;
        IppiSize sz = { img->width - M + 1, img->height - M + 1 };
        double isz = 1. / (img->width * img->height);
        cvSmooth(img, med1, CV_MEDIAN, M);   // use IPP via the OpenCV interface
        t0 = cvGetTickCount() - t0;
        cvUseOptimized(0);                   // unload IPP
        t1 = cvGetTickCount();
        cvSmooth(img, med1, CV_MEDIAN, M);   // use the C code
        t1 = cvGetTickCount() - t1;
        t2 = cvGetTickCount();
        ippiFilterMedian_8u_C3R(             // use IPP directly
            &CV_IMAGE_ELEM(img, uchar, M/2, M/2*3), img->widthStep,
            &CV_IMAGE_ELEM(med1, uchar, M/2, M/2*3), med1->widthStep,
            sz, msz, ma);
        t2 = cvGetTickCount() - t2;
        printf("t0=%.2f, t1=%.2f, t2=%.2f\n", (double)t0*isz, (double)t1*isz, (double)t2*isz);
        return 0;
    }

I seriously have no idea which static libipp*merged.a should be added for linking. I was also wondering why I can't dynamically link those .so files. Why is IPP so hard to use? Anyway, can anybody give me a clue? Thank you very much.
    SESSION_ID session; IMAQdxError status; uInt32 bufferNumber; Image* image;
    /* Initialize camera here */
    status = IMAQdxOpenCamera("cam0", IMAQdxCameraControlModeController, &session);
    if (status) std::cerr << "Could not open camera\n";
    status = IMAQdxConfigureGrab(session);
    if (status) std::cerr << "Could not configure camera\n";
    status = IMAQdxGrab(session, image, TRUE, &bufferNumber);
    if (status) std::cerr << "Could not grab\n";
    ImageInfo info; imaqGetImageInfo(image, &info);
    // 4 channels seems to work for us. We have a color camera, so you would expect 3
    // channels, but I think the extra one just fills up the last 8 bits of a 32-bit color.
    IplImage* ret = cvCreateImageHeader(cvSize(info.xRes, info.yRes), IPL_DEPTH_8U, 4);
    std::cerr << info.pixelsPerLine << " " << info.xRes << " " << info.yRes << "\n";
    std::cerr << info.border << "\n";
    ret->imageData = (char*)info.imageStart;
    return ret;

RELEVANCY SCORE 2.32 DB:2.32:Convert kf

How do I convert a SAPscript form to Smart Forms?

SAP provides a conversion of SAPscript documents to Smart Forms. There is a function module, called FB_MIGRATE_FORM. You can start this function module by hand (via SE37), or create a small ABAP program which migrates all SAPscript forms automatically. You can also do this one-by-one in transaction SMARTFORMS, under Utilities - Migrate SAPscript form. You could also write a small batch program calling transaction SMARTFORMS and running the migration tool.

For migration of scripts to Smart Forms: 1. In Reporting, select the program SF_MIGRATE and execute it. 2. Select the names and the language of the SAPscript forms and choose Execute. The system creates the Smart Forms under the names of the SAPscript forms plus the extension SF. It displays a list of the migrated forms. 3. To change and adapt a form, go to transaction SMARTFORMS. Then activate the changed Smart Form.
RELEVANCY SCORE 2.32 DB:2.32:CIppImage (ippimage.h/.cpp) Different In Different Folders m7

CIppImage (ippimage.h/.cpp) differs between folders: picnic and uic_transcoder_con have different versions of the CIppImage class. This is in w_ipp-samples_p_7.0.7.064. Also, the uic codec CIppImage class looks the same as the one in picnic (both without the updates/changes from the release notes).

Hi Aris, thank you for noticing this. You are right, we need to unify both interfaces. We will join them, maybe even making CIppImage a part of the common UIC interface. Regards, Sergey

RELEVANCY SCORE 2.32 DB:2.32:How To Solve Error C2039: 'CFileDialog' Is Not A Member Of 'CWnd' zp

I wanted to have an event handler where, on button click, I can open an image, but I'm having a problem using CFileDialog (error C2039: 'CFileDialog' is not a member of 'CWnd'). Here is my code:

    void MyAlbum::OnBnClickedBnbrowse()
    {
        GetDlgItem(IDC_BnBrowse);
        // TODO: Add your control notification handler code here
        // Error C2039 --------------------------
        CFileDialog dlg(TRUE, NULL, NULL,
            OFN_FILEMUSTEXIST | OFN_PATHMUSTEXIST | OFN_HIDEREADONLY,
            _T("image files (*.bmp; *.jpg) |*.bmp;*.jpg;*.jpeg| All Files (*.*) |*.*||"), NULL);
        dlg.m_ofn.lpstrTitle = _T("Open Image");
        if (dlg.DoModal() != IDOK) return;
        CString mPath = dlg.GetPathName();
        IplImage* ipl = cvLoadImage(mPath, 1);
        if (!ipl) return;
        if (TheImage) cvZero(TheImage);
        ResizeImage(ipl);
        ShowImage(TheImage, IDC_Iquery);
        cvReleaseImage(&ipl);
    }

Is that because I am missing some header file or something? I tried to search on Google and also on the forum, but I failed to find any solutions.

I am sorry that what you found

DB:2.30:Best Way To Encode A 16 Bit Image To Grayscale Jpeg Image With JpegLossless Option 9x

I am new to the IPP UIC codecs. My requirement is to encode a 16-bit-pixel image to grayscale JPEG format. I need to maintain high quality with lossless compression. Here is what I am doing:

    CIppImage imageIn;
    PARAMS_JPEG param;
    param.nthreads = 1;
    param.quality = 100;
    param.color = IC_GRAY;
    param.mode = JPEG_LOSSLESS;
    param.sampling = IS_444;
    param.restart_interval = 0;
    param.huffman_opt = 0;
    param.point_transform = 0;
    param.predictor = 1;
    param.dct_scale = JD_1_1;
    param.comment_size = 0;
    CMy16BitImage sourceImg; // get the source image
    int nHeight(sourceImg.Height()), nWidth(sourceImg.Width());
    unsigned short* pSourceImgBuff = sourceImg.GetImageBuffer();
    // construct the destination image buffer
    unsigned char* pImageBuffer = new unsigned char[nHeight * nWidth];
    unsigned char* pTempBuff = pImageBuffer;
    for (int i = 0; i < nHeight; i++)
        for (int j = 0; j < nWidth; j++)
            *pTempBuff++ = (unsigned char)(*pSourceImgBuff++);
    imageIn.Attach(nHeight, nWidth, 1, 8, (void*)pSourceImgBuff, 0);
    CStdFileOutput foJPEG;
    if (!BaseStream::IsOk(foJPEG.Open("C:\\testOut.jpeg"))) return 1;
    imageIn.Sampling(IS_444);
    imageIn.Color(IC_GRAY);
    SaveImageJPEG(imageIn, param, foJPEG);

Result: the JPEG image is generated but cannot be viewed in any image viewer, including MS Paint; the format is not recognized and the viewer errors out. But if I change the mode as below, I can see some image: param.mode = JPEG_BASELINE. What am I looking for: 1. Some help to correct the above code so that the image can be viewed. 2. The best param settings for grayscale lossless encoding. 3. Can I directly attach the 16-bit image buffer to CIppImage without converting to unsigned char? I understand that for this to happen the precision param should be 16. Thanks in advance. Regards, Dhruba

It looks like the same problem as this post: software.intel/en-us/forums/showthread.phpt84143 — we can discuss it in that post.

DB:2.25:Methods Type Signature Is Not Pinvoke Compatible C cx

Hi, I am calling a C++ function from a C# project by using a DLL, where the function returns an IplImage.
I have used this to export the function: extern "C" __declspec(dllexport) IplImage* showNewImage(IplImage* input); and this in C#: [DllImport(dllFile)] public static extern MIplImage showNewImage(MIplImage input); Is the way that I have used the DLL import wrong? The function is supposed to return an image; the C++ function and the DLL are both working properly, but I need help calling it from my C# project through the DLL. Please help.

Hi Nirmal, I am facing the same problem you have discussed here. I want to know how you returned an IplImage from the C++ function to C#, as I have included the DLL using DllImport.

DB:2.23:Image Structure In Ipp 9p

I'm a new user of IPP in C, and I would like to know whether image processing should be done using the IplImage structure with casting in order to use IPP functions, or whether it is possible to manage all the programming with Ipp8u* only. So the main question is: is IPP only useful for its functions (because they are optimized), or also for its image structures? Thanks in advance for your answer.

The Intel IPP product was designed on the basis of our experience with the previous generation of Performance Libraries (Intel Signal Processing, Intel Image Processing, Intel Recognition Primitives and Intel JPEG libraries). Those libraries defined domain-specific structures and data types which required application developers to adapt their code to them; in practice this led to copying data from application structures into (for example) IPL structures and back. One of the main ideas behind IPP is that we provide low-level optimized kernels, or building blocks, which require minimal adaptation at the application level. If you look through the IPP APIs you will find that IPP functions usually refer to the processed data directly by pointer. That provides an easy and flexible way for the application developer to map IPP functions onto the performance-critical parts of the application, without needing to adapt the application code to IPP data structures.
Because IPP provides low-level functions, you are free to choose whatever higher-level abstraction you want to build on top of IPP. You may build your application on the old IPL APIs implemented with IPP functions (we provide a code example of the IPL API implemented with IPP), or you may build another, higher-level API, like the ones demonstrated in other IPP samples (please check the image-processing-functions or UIC samples). To answer your main question: IPP is useful for the performance its optimizations provide, and it is also easy to integrate into whatever higher-level image-processing stack you choose, thanks to the low-level IPP API.

RELEVANCY SCORE 2.23 DB:2.23:Opencv - Cvshowimage - Shows The Image Distorted a9

I have written a program to read an uchar image, convert it to float, extend the border with additional pixels (LTB: left/top border, RBB: right/bottom border) for further operations (which I haven't included), and convert the resulting float image back to an uchar image for showing/saving. When I try to show or save it using OpenCV calls, the image comes out distorted:

    #define LTB 30
    #define RBB 20

    void convert_datatype_uchar_to_float(IplImage* image_source, IplImage* image_converted);
    void convert_datatype_float_to_uchar(IplImage* image_source, IplImage* image_converted);
    void border_extend(IplImage* image_src, IplImage* image_extended, int left_top_border, int right_bottom_border);

    int main()
    {
        IplImage *image_src, *image_converted, *image_extended, *image_show;
        CvSize size_src, size_show, size_extended;
        image_src = cvLoadImage("2012-02-21-190911.jpg", CV_LOAD_IMAGE_GRAYSCALE);
        size_src = cvGetSize(image_src);
        image_converted = cvCreateImage(size_src, IPL_DEPTH_32F, 1);
        convert_datatype_uchar_to_float(image_src, image_converted);
        size_extended.width  = LTB + size_src.width  + RBB;
        size_extended.height = LTB + size_src.height + RBB;
        image_extended = cvCreateImage(size_extended, IPL_DEPTH_32F, 1);
        border_extend(image_converted, image_extended, LTB, RBB);
        size_show.width  = size_extended.width;
        size_show.height = size_extended.height;
        image_show = cvCreateImage(size_show, IPL_DEPTH_8U, 1);
        convert_datatype_float_to_uchar(image_extended, image_show);
        cvNamedWindow("image_extended", 1);
        cvShowImage("image_extended", image_show);
        cvReleaseImage(&image_src);
        cvReleaseImage(&image_converted);
        cvReleaseImage(&image_extended);
        cvReleaseImage(&image_show);
    }

    void convert_datatype_uchar_to_float(IplImage* image_source, IplImage* image_converted)
    {
        uchar* src_ptr = (uchar*)image_source->imageData;
        float* dst_ptr = (float*)image_converted->imageData;
        for (int i = 0; i < image_source->width * image_source->height; i++)
            dst_ptr[i] = (float)src_ptr[i];
    }

    void convert_datatype_float_to_uchar(IplImage* image_source, IplImage* image_converted)
    {
        float* src_ptr = (float*)image_source->imageData;
        uchar* dst_ptr = (uchar*)image_converted->imageData;
        for (int i = 0; i < image_source->width * image_source->height; i++)
            dst_ptr[i] = (uchar)src_ptr[i];
    }

    void border_extend(IplImage* image_src, IplImage* image_extended, int left_top_border, int right_bottom_border)
    {
        CvRect ROI_extended;
        ROI_extended.x = left_top_border;
        ROI_extended.y = left_top_border;
        ROI_extended.width  = image_src->width;
        ROI_extended.height = image_src->height;
        cvSetImageROI(image_extended, ROI_extended);
        cvCopyImage(image_src, image_extended);
        cvResetImageROI(image_extended);
    }

For particular values of LTB and RBB I get proper output (like 10, 10, etc.). I need help regarding this.

Hi Karthikeyan S. The problem is that the image data access is not correct, because IPL image rows are padded to a multiple of 4 bytes. For example, if the image width is 2, the row width in memory will be 4; that is why widthStep, rather than the image width, is used in image processing. To show the right image you need to change the two convert functions, for example:

    void convert_datatype_float_to_uchar(IplImage* image_source, IplImage* image_converted)
    {
        float* src_ptr = (float*)image_source->imageData;
        uchar* dst_ptr = (uchar*)image_converted->imageData;
        for (int i = 0; i < image_source->height; i++)
            for (int j = 0; j < image_source->width; j++)
                dst_ptr[i * image_converted->widthStep / sizeof(uchar) + j] =
                    (uchar)(src_ptr[i * image_source->widthStep / sizeof(float) + j]);
    }
The cvCopyImage call should take care of the problem internally. The same applies when using IPP functions; please see the article "Processing an Image from Edge to Edge". And if it is an OpenCV question, you can ask in the OpenCV forum via the link Intel IPP - Open Source Computer Vision Library (OpenCV) FAQ. Hope it helps, Ying

RELEVANCY SCORE 2.23 DB:2.23:Assemblage Fonction Sobel Et Fonction Threshold Opencv pp

Hello, I have two functions that I want to chain so they execute one after the other; however, in Visual Studio, when I run the program only the first function executes. It seems to be a conversion problem between IplImage and Mat images, but after several attempts I still cannot get the second function to work. If anyone has an idea, you are welcome (I copy the current code below):

    #include <stdlib.h>
    #include "StdAfx.h"
    using namespace cv;

    int height, width, step, step_mono, channels; /* step_mono handles the widthStep member of a monochrome image */
    uchar *data, *data_mono; /* similarly, data_mono handles the data of the monochrome image */
    int i, j, k;
    //Mat src;
    Mat src_gray;
    Mat grad;
    char* window_name = "Sobel Demo - Simple Edge Detector";
    int scale = 1;
    int delta = 0;
    int ddepth = CV_16S;

    /// Load an image
    IplImage* frame = cvLoadImage("D:/Entwicklung/OpenCV/2010/test threshold/Release/sobel.bmp", 1);
    GaussianBlur(src, src, Size(3,3), 0, 0, BORDER_DEFAULT);
    /// Convert it to gray
    cvtColor(src, src_gray, CV_RGB2GRAY);
    /// Create window
    namedWindow(window_name, CV_WINDOW_AUTOSIZE);
    /// Generate grad_x and grad_y
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    /// Gradient X
    //Scharr(src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_x, abs_grad_x);
    /// Gradient Y
    //Scharr(src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_y, abs_grad_y);
    /// Total Gradient (approximate)
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
    imshow(window_name, grad);
    waitKey(0);
    // Result of grad in an IplImage.
    //IplImage* mono_thres = cvCreateImage(cvGetSize(frame), 8, 1);
    IplImage frame2(grad);
    // Do the ending
    IplImage* mono_thres = cvCreateImage(cvGetSize(&frame2), 8, 1);
    height = frame2.height; /* height is a member of the IplImage structure, hence it comes in handy in such situations, and the same goes for the four statements below */
    width = frame2.width;
    step = frame2.widthStep;
    step_mono = mono_thres->widthStep;
    channels = frame2.nChannels; /* number of channels in the image */
    data = (uchar*)frame2.imageData; /* the image is treated as unsigned char data, hence the unsigned char pointer */
    cvNamedWindow("My Window", CV_WINDOW_AUTOSIZE);
    data_mono = (uchar*)mono_thres->imageData; /* the mono image data is handled via data_mono */
    for (i = 0; i < height; i++)
        for (j = 0; j < width; j++)
            /* I am copying the first channel from the image in frame into the monochrome image with the line below... */
    cvThreshold(mono_thres, mono_thres, 70, /* 70 is the lower cut-off */
                150, /* this is the higher cut-off */
                CV_THRESH_BINARY); /* the type of thresholding; more details in the documentation */
    //imshow(window_name, grad);
    cvShowImage("My Window", mono_thres);
    cvDestroyWindow("My Window");

First of all, thank you for replying.
After several hours of searching I did, all things considered, find my mistakes: it was actually a misplaced waitKey plus one that was superfluous, along with a few values to adjust; it was not an image-format problem. I am sending back the corrected code in case it is of interest:

    #include <stdlib.h>
    #include "StdAfx.h"
    using namespace cv;

    int height, width, step, step_mono, channels; /* step_mono handles the widthStep member of a monochrome image */
    uchar *data, *data_mono; /* similarly, data_mono handles the data of the monochrome image */
    int i, j, k;
    //Mat src;
    Mat src_gray;
    Mat grad;
    char* window_name = "Sobel Demo - Simple Edge Detector";
    int scale = 1;
    int delta = 0;
    int ddepth = CV_16S;

    /// Load an image
    IplImage* frame = cvLoadImage("D:/Entwicklung/OpenCV/2010/test threshold/Release/sobel.bmp", 1);
    GaussianBlur(src, src, Size(3,3), 0, 0, BORDER_DEFAULT);
    /// Convert it to gray
    cvtColor(src, src_gray, CV_RGB2GRAY);
    /// Create window
    namedWindow(window_name, CV_WINDOW_AUTOSIZE);
    /// Generate grad_x and grad_y
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    /// Gradient X
    //Scharr(src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_x, abs_grad_x);
    /// Gradient Y
    //Scharr(src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_y, abs_grad_y);
    /// Total Gradient (approximate)
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
    imshow(window_name, grad);
    //waitKey(0);
    // Result of grad in an IplImage.
    //IplImage* mono_thres = cvCreateImage(cvGetSize(frame), 8, 1);
    IplImage frame2(grad);
    /* Mat imgMat(frame); grad = frame; IplImage iplimg = grad; CvMat cvmat = grad; */
    // Do the ending
    IplImage* mono_thres = cvCreateImage(cvGetSize(&frame2), 8, 1);
    height = frame2.height; /* height is a member of the IplImage structure, hence it comes in handy in such situations, and the same goes for the four statements below */
    width = frame2.width;
    step = frame2.widthStep;
    step_mono = mono_thres->widthStep;
    channels = frame2.nChannels; /* number of channels in the image */
    data = (uchar*)frame2.imageData; /* the image is treated as unsigned char data, hence the unsigned char pointer */
    cvNamedWindow("My Window", CV_WINDOW_AUTOSIZE);
    data_mono = (uchar*)mono_thres->imageData; /* the mono image data is handled via data_mono */
    for (i = 0; i < height; i++)
        for (j = 0; j < width; j++)
            /* I am copying the first channel from the image in frame into the monochrome image with the line below... */
    //cvThreshold(mono_thres, mono_thres, 70, /* 70 is the lower cut-off */
    //            150, /* this is the higher cut-off */
    //            CV_THRESH_BINARY); /* the type of thresholding; more details in the documentation */
    cvThreshold(mono_thres, mono_thres, 12, /* 12 is the lower cut-off */
                40, /* this is the higher cut-off */
                CV_THRESH_BINARY); /* the type of thresholding; more details in the documentation */
    //imshow(window_name, grad);
    waitKey(0);
    cvDestroyWindow("My Window");

DB:2.22:Skeletal Viewer R610 Abort() Has Been Called When Using Opencv Cvshowimage f7

From this thread: social.msdn.microsoft/Forums/en-US/kinectsdknuiapi/thread/7a1a3569-b83b-4bd9-9b73-718c44992df1 — I modified the SkeletalViewer to produce an RGB stream with OpenCV.
    void CSkeletalViewerApp::Nui_GotVideoAlert()
    {
        const NUI_IMAGE_FRAME* pImageFrame = NULL;
        IplImage* kinectColorImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);
        HRESULT hr = NuiImageStreamGetNextFrame(m_pVideoStreamHandle, 0, &pImageFrame);
        if (FAILED(hr)) return;
        NuiImageBuffer* pTexture = pImageFrame->pFrameTexture;
        KINECT_LOCKED_RECT LockedRect;
        pTexture->LockRect(0, &LockedRect, NULL, 0);
        if (LockedRect.Pitch != 0)
        {
            BYTE* pBuffer = (BYTE*)LockedRect.pBits;
            m_DrawVideo.DrawFrame((BYTE*)pBuffer);
            cvSetData(kinectColorImage, (BYTE*)pBuffer, kinectColorImage->widthStep);
            cvShowImage("Color Image", kinectColorImage);
            cvWaitKey(10);
        }
        else
        {
            OutputDebugString(L"Buffer length of received texture is bogus\r\n");
        }
        NuiImageStreamReleaseFrame(m_pVideoStreamHandle, pImageFrame);
    }

It makes no sense that memory usage starts growing after you set the IplImage to NULL, because the data should be released by NuiImageStreamReleaseFrame anyway. Still, since that didn't crash, my guess is that the SDK somehow locks the memory of the image, and an error is raised when trying to deallocate it. I mean the C++ interface of OpenCV (2.x), which uses cv::Mat as a replacement for IplImage and the other kinds of matrices in the C one.

RELEVANCY SCORE 2.22 DB:2.22:Iplimage To Picture Control kz

While integrating OpenCV libraries in LabVIEW I got stuck with this. I am able to display the processed images in a window handle; I want to display the same IplImage (OpenCV image) in a Picture control or LabVIEW Picture on the front panel. How can I do this? Can someone help me sort out this issue? Awaiting your reply, Sasi. Certified LabVIEW Associate Developer. If you can DREAM it, you can DO it - Walt Disney

You can convert the IplImage to a bitmap image to show in a control, or convert the IplImage to a 2D array to display in the LabVIEW Picture control. Maybe this code will help you:

    #include "extcode.h"
    #include "cv.h"      // main OpenCV header
    #include "highgui.h" // GUI header
    #pragma pack(1)
    /* typedef struct ... TD2 ... */
    __declspec(dllexport) void ColorImageToRGBHexArray(IplImage* a, TD1Hdl Pt, er* error);
    __declspec(dllexport) void ColorImageToRGBHexArray(IplImage* a, TD1Hdl Pt, er* error)
    {
        if (a->nChannels == 3)
        {
            B = ((uchar*)(a->imageData + i * a->widthStep))[j * a->nChannels + 0];
            G = ((uchar*)(a->imageData + i * a->widthStep))[j * a->nChannels + 1];
            R = ((uchar*)(a->imageData + i * a->widthStep))[j * a->nChannels + 2];
            (*Pt)->elt[i * a->width + j] = R * 65536 + G * 256 + B;
        }
    }

RELEVANCY SCORE 2.22 DB:2.22:Unable To Encode Yuv 4:2:0 Data To Color Jpeg Image 3k

I have video streaming (YUV 4:2:0, 1920x1080) that comes from hardware. I was able to encode and mux this video into MPEG2, MPEG4 and H.264 files, but I have a problem saving a single frame as a JPEG file. I have a buffer with a raw YUV 4:2:0 video frame (1920x1080) and, using sample code from the UIC JPEG sample, I was only able to encode the data as a gray image. I tried experimenting with different configurations and parameters, but either the image file has 0 size or, if it doesn't, when I try to open it I get the message that the file can't be opened because it appears to be damaged, corrupted, or too large. Any ideas what could be wrong? I tried to use the SaveImageJPEG function from jpeg.cpp without changes and played around with the data I pass to it. This is a sample of one of my attempts before calling SaveImageJPEG:

    CIppImage data(1920, 1080, 1, 8, 0);
    data.Attach(1920, 1080, 1, 8, YUV420buffer, 0);
    data.Color(IC_RGB);
    data.NChannels(1);
    data.Format(IF_FIXED);
    data.ComponentOrder(0);
    data.Sampling(IS_422);
    PARAMS_JPEG params;
    params.nthreads = 1;
    params.color = IC_RGB;
    params.huffman_opt = 1;
    params.mode = JPEG_BASELINE;
    params.point_transform = 0;
    params.predictor = 1;
    params.quality = 100;
    params.sampling = IS_422;
    params.dct_scale = JD_1_8;
    params.use_qdct = 1;
    params.tmode = 1;
    params.restart_interval = 1;
    params.comment_size = sizeof(params.comment);

Thank you in advance, Sergey

The UIC JPEG encoder will access the image data in memory in a way that depends on how you describe the memory buffer. For the same image width and height, a 3-channel image takes less memory than a 4-channel image. If you mistakenly describe your image format, the encoder may not access the data correctly (that seems to be the reason for the trouble with 4:1:1 sampling you mentioned). Note that the UIC framework assumes an image with Gray color takes a single color channel, and an image with a single color channel can't be subsampled with either 4:1:1 or 4:2:2 sampling factors. In order to use the UIC JPEG encoder properly, you need to correctly specify the actual parameters of your input (uncompressed) image and also specify the desired parameters of your output (compressed) image. I would recommend you play a little with the UIC picnic application, where you can see what options are available for the JPEG encoder and how they affect the resulting image.

RELEVANCY SCORE 2.22 DB:2.22:Come Across Exception_Access_Violation When Wrapping The Opencv Facedetect mc

Hi all. I am trying to simply wrap the facedetect C code for Java. I succeeded in running the C code alone and built the DLL file successfully. But when I wrap the C code for Java, an EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00009548, pid=2276, tid=2564 occurs. I debugged the program line by line; it suggested that cvLoadImage and cvHaarDetectObjects cannot be accepted by the JVM. Could anyone tell me what's wrong with the original C code? How can I modify it? Is it because of a null pointer? Here is my C code:

    // OpenCV Sample Application: facedetect.c
    // Include header files
    #include "cxcore.h"
    #include "highgui.h"
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <assert.h>
    #include <math.h>
    #include <float.h>
    #include <limits.h>
    #include <time.h>
    #include <ctype.h>

    // Main function, defines the entry point for the program.
JNIEXPORT void JNICALL JavalirefaceMaintest (JNIEnv env, jobject obj) CvMemStorage storage 0 CvHaarClassifierCascade cascade 0 CvSeq faces // Structure for getting video from camera or avi CvCapture capture 0 const char cascadename // Input file name for avi or image file. // const char inputname cascadename quote:/OpenCV/data/haarcascades/haarcascadefrontalfacealt. xmlquot // inputname argc 2. argv2. 0 // Load the HaarClassifierCascade cascade (CvHaarClassifierCascade)cvLoad( cascadename, 0, 0, 0 ) // Check whether the cascade has loaded successfully. Else report and error and quit // Allocate the memory storage storage cvCreateMemStorage(0) char aquote:/lena. jpgquot // Create a new named window with title: result cvNamedWindow( quotresultquot, 1 ) // Assume the image to be lena. jpg, or the inputname specified const char filename a // Load the image from that filename IplImage image cvLoadImage(filename, 1) // IplImage image1 cvCreateImage( cvSize(image-width, image-height), 8, 3 ) // image1-imageDataimage-imageData // If Image is loaded succesfully, then: if( image ) int scale 1 CvRect rect // Create a new image based on the input image // IplImage temp cvCreateImage( cvSize(image-width/scale, image-height/scale), 8, 3 ) // Create two points to represent the face locations CvPoint pt1, pt2 int i // Clear the memory storage which was used before cvClearMemStorage( storage ) // Find whether the cascade is loaded, to find the faces. If yes, then: if( cascade ) // There can be more than one face in an image. So create a growable sequence of faces. // Detect the objects and store them in the sequence faces cvHaarDetectObjects( image, cascade, storage, 1.1, 2, CVHAARDOCANNYPRUNING, cvSize(40, 40) ) // Loop the number of faces found. for( i 0 i (faces. faces-total. 
0) i ) // Create a new rectangle for drawing the face CvRect r (CvRect)cvGetSeqElem( faces, i ) // Find the dimensions of the face, and scale it if necessary pt1.x r-xscale pt2.x (r-xr-width)scale pt1.y r-yscale pt2.y (r-yr-height)scale // Draw the rectangle in the input image cvRectangle( image, pt1, pt2, CVRGB(255, 0, 0), 3, 8, 0 ) rect r0 // cvResetImageROI(img) // vRect cvGetImageROI(img) // if(rect) // printf(quotalibaba nquot) // Show the image in the window named quotresultquot // cvShowImage( quotresultquot, image ) // Release the temp image created. // cvReleaseImage( temp ) // cvSaveImage(quote:/NetBeansProjects/Lire2901/alibaba. jpgquot, image) // cvSetImageROI(image1, rect) // cvSaveImage(quote:/NetBeansProjects/Lire2901/alimama. jpgquot, image1) // IplImage temp cvCreateImage( cvSize(image-width/scale, image-height/scale), image-depth, image-nChannels ) cvShowImage(quotresultquot, image) // Wait for user input cvWaitKey(0) // Release the image memory cvReleaseImage( image ) cvDestroyWindow(quotresultquot) // cvReleaseImage( temp ) // cvSetImageROI(image1, rect) int xfaces-total if(x0) return 1 else return 0 And the Java code is also listed here: / To change this template, choose Tools Templates and open the template in the editor. / package lireface import java. awt. image. BufferedImage import java. URL import javax. imageio. ImageIO import java. io. IOException import java. awt. image. WritableRaster public class Main static System. load(quote:NetBeansProjectsfacedistface. dllquot) public static void main(String args) throws IOException new Main().test() public static native void test() I am looking forward to your help. Gracias. Hola a todos. I am trying to simply wrap the facedetect C code to Java. I actually succeed in running the C code alone and build the dll file successfully. But when I am wrapping the C code to Java, The EXCEPTIONACCESSVIOLATION (0xc0000005) at pc0x00009548, pid2276, tid2564 error just occurs. 
I debugged the program line by line; it suggested that cvLoadImage and cvHaarDetectObjects cannot be accepted by the JVM. Could anyone tell what's wrong with the original C code? How can I modify it? Is it because of a null pointer? Here is my C code:

// OpenCV Sample Application: facedetect.c
// Include header files
#include "cxcore.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>

// Main function, defines the entry point for the program.
JNIEXPORT void JNICALL Java_lireface_Main_test(JNIEnv *env, jobject obj)
{
    CvMemStorage *storage = 0;
    CvHaarClassifierCascade *cascade = 0;
    CvSeq *faces;
    // Structure for getting video from camera or avi
    CvCapture *capture = 0;
    const char *cascadename;
    // Input file name for avi or image file.
    // const char *inputname;
    cascadename = "e:/OpenCV/data/haarcascades/haarcascade_frontalface_alt.xml";
    // inputname = argc > 2 ? argv[2] : 0;
    // Load the HaarClassifierCascade
    cascade = (CvHaarClassifierCascade*)cvLoad( cascadename, 0, 0, 0 );
    // Check whether the cascade has loaded successfully. Else report an error and quit
    // Allocate the memory storage
    storage = cvCreateMemStorage(0);
    char *a = "e:/lena.jpg";
    // Create a new named window with title: result
    cvNamedWindow( "result", 1 );
    // Assume the image to be lena.jpg, or the inputname specified
    const char *filename = a;
    // Load the image from that filename
    IplImage *image = cvLoadImage(filename, 1);
    // IplImage *image1 = cvCreateImage( cvSize(image->width, image->height), 8, 3 );
    // image1->imageData = image->imageData;
    // If the image is loaded successfully, then:
    if( image )
    {
        int scale = 1;
        CvRect rect;
        // Create a new image based on the input image
        // IplImage *temp = cvCreateImage( cvSize(image->width/scale, image->height/scale), 8, 3 );
        // Create two points to represent the face locations
        CvPoint pt1, pt2;
        int i;
        // Clear the memory storage which was used before
        cvClearMemStorage( storage );
        // Find whether the cascade is loaded, to find the faces. If yes, then:
        if( cascade )
        {
            // There can be more than one face in an image. So create a growable sequence of faces.
            // Detect the objects and store them in the sequence
            faces = cvHaarDetectObjects( image, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) );
            // Loop the number of faces found.
            for( i = 0; i < (faces ? faces->total : 0); i++ )
            {
                // Create a new rectangle for drawing the face
                CvRect *r = (CvRect*)cvGetSeqElem( faces, i );
                // Find the dimensions of the face, and scale it if necessary
                pt1.x = r->x*scale;
                pt2.x = (r->x+r->width)*scale;
                pt1.y = r->y*scale;
                pt2.y = (r->y+r->height)*scale;
                // Draw the rectangle in the input image
                cvRectangle( image, pt1, pt2, CV_RGB(255, 0, 0), 3, 8, 0 );
                rect = r[0];
                // cvResetImageROI(img);
                // vRect = cvGetImageROI(img);
                // if(rect)
                //     printf("alibaba \n");
            }
        }
        // Show the image in the window named "result"
        // cvShowImage( "result", image );
        // Release the temp image created.
        // cvReleaseImage( &temp );
        // cvSaveImage("e:/NetBeansProjects/Lire2901/alibaba.jpg", image);
        // cvSetImageROI(image1, rect);
        // cvSaveImage("e:/NetBeansProjects/Lire2901/alimama.jpg", image1);
        // IplImage *temp = cvCreateImage( cvSize(image->width/scale, image->height/scale), image->depth, image->nChannels );
        cvShowImage("result", image);
        // Wait for user input
        cvWaitKey(0);
        // Release the image memory
        cvReleaseImage( &image );
        cvDestroyWindow("result");
        // cvReleaseImage( &temp );
        // cvSetImageROI(image1, rect);
    }
    int x = faces->total;
    if (x > 0) return 1; else return 0;
}

And the Java code is also listed here:

/* To change this template, choose Tools | Templates and open the template in the editor. */
package lireface;

import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;
import java.io.IOException;
import java.awt.image.WritableRaster;

public class Main {
    static {
        System.load("e:\\NetBeansProjects\\face\\dist\\face.dll");
    }
    public static void main(String[] args) throws IOException {
        new Main().test();
    }
    public static native void test();
}

I am looking forward to your help. Thanks.

DB:2.22:How To Obtain X-Position And Y-Position For All Contours Of The Following Code zd

#include "stdafx.h"
#include <windows.h>
#include <cv.h>
#include <highgui.h>

int main( int argc, char **argv )
{
    IplImage *src;
    src = cvLoadImage("01.bmp", 0);
    IplImage *dst = cvCreateImage( cvGetSize(src), 8, 3 );
    CvMemStorage *storage = cvCreateMemStorage(0);
    CvSeq *contour = 0;
    cvThreshold( src, src, 1, 255, CV_THRESH_BINARY );
    cvNamedWindow( "Source", 1 );
    cvShowImage( "Source", src );
    cvFindContours( src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
    cvZero( dst );
    for( ; contour != 0; contour = contour->h_next )
    {
        CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
        /* replace CV_FILLED with 1 to see the outlines */
        cvDrawContours( dst, contour, color, color, -1, CV_FILLED, 8 );
    }
    cvNamedWindow( "Components", 1 );
    cvShowImage( "Components", dst );
    cvWaitKey(0);
}

Hello, I think your issue should be raised in the OpenCV support forum. I believe they will know more about this issue than we do, so I will move this one to off-topic. Please open a new thread in that forum.
Thanks for your understanding. Best regards, Jesse
Jesse Jiang MSFT MSDN Community Support | Feedback to us

RELEVANCY SCORE 2.22

SaveImageJPEG(imageIn, param, foJPEG)

Result: the JPEG image is generated but cannot be viewed in any image viewer, including MS Paint; the format is not recognised and errors out. But if I change the mode as below, I can see some image. What I am looking for:
1. Some help to correct the above code so that the image can be viewed.
2. The best param settings for GRAYSCALE LOSSLESS encoding.
3. Can I directly attach the 16-bit image buffer to CIppImage without converting to unsigned char? I know that for this to happen the precision param should be 16.
Thanks in advance.

IPP JPEG supports the following compression modes defined by JPEG ISO/IEC 10918:
1. Baseline, 8-bit, DCT-based process, Huffman entropy coding
2. Extended baseline, 8- and 12-bit, DCT-based process, Huffman entropy coding
3. Lossless, 1..16 bits, prediction-based, Huffman entropy coding
Most widely available image viewers and editors support only the Baseline and Extended Baseline modes with 8 bits per color component. To view a lossless image created by the IPP JPEG encoder you need to look for specialized software (medical image viewers frequently support lossless compression modes, as required by the DICOM specification).

RELEVANCY SCORE 2.21

DB:2.21:Opencv Default Colorspace c1

I have an urgent question regarding cvQueryFrame(). I couldn't find the answer anywhere, so I hope someone here can help. If I read a video which is encoded in e.g. the YUV 4:2:2 color format in OpenCV by:

cap = cvCreateFileCapture("input.avi");
IplImage *in = cvQueryFrame(cap);

will "in" then always be in the BGR colorspace, or will "in" be in the YUV 4:4:4 colorspace? Will OpenCV in cvQueryFrame() automatically convert a video of any colorspace into the BGR colorspace by default? Or if the encoded video is YUV, will the colorspace after cvQueryFrame() also be YUV? Thanks in advance.

Have you tried the discussion forum on OpenCV? They should be able to help you.

RELEVANCY SCORE 2.21

DB:2.21:Does Uic Support Partially Decode For Ultra-High Resolution Jpeg. 9j

I'm trying to decode an 8k x 6k JPEG at 16-bit color depth and then rearrange the color channels from RGB to BGRA, but I can NOT allocate a large continuous memory space for it; it is about 366 MB = 274 MB (RGB, 16-bit) + 92 MB (A, 16-bit). I want to divide it into separate memory blocks inside the (x32) process virtual space. I knew and tried that WIC (Windows Imaging Component) has the ability to decode an image by specifying a WICRect (x, y, width, height). So I traced the UIC source code (uicjpegdec.cpp, jpeg.cpp, jpegdec.cpp, etc.). It looks like I could do what I want by setting ROI information in ImageSamplingGeometry. The sad thing is, it failed for me: no matter how I configure the origin in geometry.RefGridRect(), I only get the result buffer starting from (0, 0), even when I set the origin to (0, 1000), for example. Here is my calling procedure (to decode the 8k x 6k JPEG):
1. Allocate an 8k x 1k memory space (so in this case I need to do the ROI decode six times, for different y positions).
2. Attach this memory block to a CIppImage and pass this CIppImage instance to ReadImageJPEG(BaseStreamInput in, PARAMSJPEG param, CIppImage image) in jpeg.cpp.
3. Set the target decode ROI information (0, 1000, 8k, 1k) in an ImageSamplingGeometry variable, called geometry.
4. Attach the input buffer (CIppImage) to a UIC::Image imageCn by calling imageCn.Buffer().Attach(dataPtr, dataOrder, geometry).
5. jpegdec.ReadData(imageCn.Buffer().DataPtr(), dataOrder) is performed.

1. Does UIC support ROI decode? I didn't find any code referring to the input origin.
2. If UIC supports it, what am I missing in the procedure above? Great thanks in advance.

UIC by design is not capable of processing encoding/decoding using ROIs (or slices). It may be a problem, because the size of images is growing quickly. We'll look at what can be done in this direction. 16-bit pixels in UIC can be used in JPEG lossless mode only.

RELEVANCY SCORE 2.21

DB:2.21:Problem With Use Of Ipp7.1 & Opencv 2.4.1 s9

Please tell me what steps I should follow to install IPP 7.1 on Windows and use it in OpenCV 2.4.2. I downloaded the IPP 7.1 evaluation version and used CMake 2.8; then I configured static OpenCV in CMake and built all projects in VS2008 without any problem.
For the static project I appended the following list for OpenCV 3rdparty:

opencv_calib3d241.lib opencv_contrib241.lib opencv_core241.lib opencv_features2d241.lib opencv_flann241.lib opencv_gpu241.lib opencv_highgui241.lib opencv_imgproc241.lib opencv_legacy241.lib opencv_ml241.lib opencv_nonfree241.lib opencv_objdetect241.lib opencv_photo241.lib opencv_stitching241.lib opencv_ts241.lib opencv_video241.lib opencv_videostab241.lib libjasper.lib libjasperd.lib libjpeg.lib libjpegd.lib libpng.lib libpngd.lib libtiff.lib libtiffd.lib zlib.lib zlibd.lib user32.lib

And then appended the following list for the IPP static libraries:

ippac_l.lib ippcc_l.lib ippch_l.lib ippcore_l.lib ippcv_l.lib ippdc_l.lib ippdi_l.lib ippi_l.lib ippj_l.lib ippm_l.lib ippr_l.lib ippsc_l.lib ipps_l.lib ippvc_l.lib ippvm_l.lib

My project compiled without any problem. I used the following code as a way to make sure that IPP is installed and working correctly. This function has 2 input arguments. The first one is "opencv_lib" and it will be filled with the version of OpenCV. But my problem is with the second parameter: "add_modules" is always empty.

const char *opencv_lib = 0;
const char *add_modules = 0;
cvGetModuleInfo(0, &opencv_lib, &add_modules);
printf("\t opencv_lib = %s,\n\t add_modules = %s\n\n", opencv_lib, add_modules);

There is another problem too, which I believe relates to the previous one. In the following code I've used cvUseOptimized(1) and cvUseOptimized(0) before the same loop, but the odd point is that the processing time is practically equal for both:

double t1, t2, timeCalc;
IplImage *templateimage = cvLoadImage("c:/box.png", 0);
IplImage *convertedimage = cvLoadImage("c:/box_in_scene.png", 0);
CvSize cvsrcSize = cvGetSize(convertedimage);
cout << " image match template using OpenCV cvMatchTemplate() " << endl;
IplImage *imagencc, *resultncc;
imagencc = cvCreateImage(cvsrcSize, 8, 1);
memcpy(imagencc->imageData, convertedimage->imageData, convertedimage->imageSize);
resultncc = cvCreateImage(cvSize(convertedimage->width - templateimage->width + 1, convertedimage->height - templateimage->height + 1), IPL_DEPTH_32F, 1);
int NumUploadedFunction = cvUseOptimized(1);
t1 = (double)cvGetTickCount();
for (int j = 0; j < LOOP; j++)
    cvMatchTemplate(imagencc, templateimage, resultncc, CV_TM_CCORR_NORMED);
t2 = (double)cvGetTickCount();
timeCalc = (t2 - t1) / ((double)cvGetTickFrequency() * 1000. * 1000.0);
cout << " OpenCV matchtemplate using cross-correlation Valid: " << timeCalc << endl;
NumUploadedFunction = cvUseOptimized(0);
t1 = (double)cvGetTickCount();
for (int j = 0; j < LOOP; j++)
    cvMatchTemplate(imagencc, templateimage, resultncc, CV_TM_CCORR_NORMED);
t2 = (double)cvGetTickCount();
timeCalc = (t2 - t1) / ((double)cvGetTickFrequency() * 1000. * 1000.0);
cout << " OpenCV matchtemplate using cross-correlation Valid: " << timeCalc << endl;

Also, please take a look at a similar very old thread on the IPP forum:

DB:2.21:Simple Opencv Program Freezes My Pc xk

I'm trying to do a simple OpenCV program, but it freezes my PC when I run it. I just can't figure out what is causing this. The program is:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, char **argv)
{
    int i = 10000;
    cvNamedWindow("img", CV_WINDOW_AUTOSIZE);
    CvCapture *capture = cvCreateCameraCapture(-1);
    IplImage *img;
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 320);
    while (i--)
    {
        img = cvQueryFrame(capture);
        if (!img) break;
        cvShowImage("img", img);
    }
}

It is maybe a driver issue. Did you check whether your webcam has specific drivers or is supported? See e.g. ideasonboard.org/uvc/

RELEVANCY SCORE 2.21

DB:2.21:Jpeg2000 To Dxt5n mc

Hello, I have a normal map saved as a JP2.
Using the UIC codecs and the CIppImage class, I want to load the JP2, then convert it to DXT5, but with the source red channel placed in the destination alpha channel. Here is my code:

CStdFileInput inStream;
inStream.Open("test.jp2");
PARAMSJPEG2K params;
ZeroMemory(&params, sizeof(PARAMSJPEG2K));
params.nthreads = 2;
CIppImage image;
ReadImageJPEG2000(inStream, params, image);
CStdFileOutput outStream;
outStream.Open("test.dds");
CIppImage alphaImage;
image.ToRGBA32(alphaImage);
int dxt5Order[4];
alphaImage.SwapChannels(dxt5Order);
PARAMSDDS ddsParams;
ZeroMemory(&ddsParams, sizeof(ddsParams));
ddsParams.ac = 1;
ddsParams.nthreads = 2;
ddsParams.fmt = DDS_DXT5;
SaveImageDDS(alphaImage, ddsParams, outStream);

The resulting image is a gray-scale texture with a blank alpha channel. I get the same result whether or not I call SwapChannels on alphaImage, which leads me to believe that SaveImageDDS isn't handling something properly with 4-channel textures, or ToRGBA32 is bugged. That, or I'm doing something wrong, which is more likely :) Does anyone see something wrong with my approach? Thank you, Eric

Thanks Vladimir. I was confused as to what the .ac meant. Setting it to zero fixed the issue. Eric

RELEVANCY SCORE 2.21

DB:2.21:Runtime Error While Copying Data With Pointers (Vc, Win7) dx

Hello, I'm trying to copy data from a TIFF image into an OpenCV IplImage. I have also posted this question to the OpenCV forums, but I feel like I must be missing some pointer basics, and so have posted here. First, code for loading the TIFF, straightforward:

void CProjectorTabCtrl::OnBnClickedButtoncalibrate()
{
    TIFF *currentTif;
    IplImage *in = NULL;
    int tiffW;
    CFileDialog dlg(TRUE, _T(".tif"), NULL,
                    OFN_FILEMUSTEXIST | OFN_PATHMUSTEXIST | OFN_HIDEREADONLY,
                    _T("TIF images (*.tif)|*.tif||"), NULL);
    dlg.m_ofn.lpstrTitle = _T("Load an TIFF Image");
    if (dlg.DoModal() == IDOK)
    {
        LPCSTR m_filename = dlg.GetPathName();
        currentTif = TIFFOpen(m_filename, "r");
        TIFFGetField(currentTif, TIFFTAG_IMAGEWIDTH, &tiffW); // get image width
        CString errorString = TEXT("");
        errorString.Format(TEXT("%d"), tiffW);
        MessageBox(errorString, _T("TIFF width is non-zero."), 0);
        // TIFF is good
        in = TIFFtoIpl(currentTif);
    }
    else
        MessageBox(_T("Did not get a valid image."), _T("Did not get a valid image"), 0);
}

Hello, I did forget to allocate the buffer, thank you for the tip. Best, --Phu

RELEVANCY SCORE 2.21

DB:2.21:Link Errors In Image Codecs m7

I used Intel IPP version 6.0.167 and upgraded to Intel IPP Composer XE 2011 SP1 and ipp-samples p7.0.7.064. My machine OS is Win7 32-bit; my DirectX version is 11. I wrote a simple tester that makes a simple wrapper for JPEG 2000 (MotionJPEG2000dec.lib), and when I compile it I get these link errors:

Error 3 error LNK2001: unresolved external symbol "public: int __thiscall CIppImage::Alloc(struct IppiSize, int, int, int)" (AllocCIppImageQAEHUIppiSizeHHHZ) MotionJPEG2000dec.lib
Error 4 error LNK2001: unresolved external symbol "public: virtual __thiscall CIppImage::~CIppImage(void)" (1CIppImageUAEXZ) MotionJPEG2000dec.lib
Error 5 error LNK2001: unresolved external symbol "public: __thiscall CIppImage::CIppImage(void)" (0CIppImageQAEXZ) MotionJPEG2000dec.lib

I added to my code:

#pragma comment(lib, "uictranscodercon.lib")
#pragma comment(lib, "uiccore.lib")
#pragma comment(lib, "uicbmp.lib")
#pragma comment(lib, "uicpnm.lib")

What is missing? Can I compile the image codecs to make static libs instead of dynamic ones?

Genady, I see that this is a duplicate. Please continue posting in your first thread, created last year.

RELEVANCY SCORE 2.21

DB:2.21:Just-In-Time Debugging, Visual Studio. k9

Hi, I'm using Intel TBB 2.1 with OpenCV and Microsoft Visual Studio 2008 Express Edition. While implementing the parallel_for function, I get a run-time error that says: "An unhandled win32 exception occurred. 5556. Just-in-Time debugging this exception failed with the following error: No installed debugger has just-in-time debugging enabled." The Express Edition doesn't support JIT debugging.
Is it essential to have a JIT debugger to use TBB? Also, why do you think the error occurs? This is the code snippet that's causing the problem:

//////////////////////////////////////////////////////////////////////////
class ApplyFoo
{
    IplImage *const *my_a;
public:
    void operator()( const blocked_range<size_t> &r ) const
    {
        IplImage *const *a = my_a;
        for( size_t i = r.begin(); i != r.end(); ++i )
            DCTscramenc(a[i]);
    }
    ApplyFoo( IplImage **a ) : my_a(a) {}
};

The error pops up as soon as there is a call to the ParallelApplyFoo function: ParallelApplyFoo(image, 4), where image is of Ip. Can someone help me sort this out? Thanks and regards, sindhura
