
Foreword

A project requirement called for face detection, so I took the chance to tidy this feature up and wrote a simple demo. The code is a bit messy, and I don't really feel like spending more time cleaning it up, but the layering should be fairly clear. Now, on to the main topic.

If you have a better implementation strategy you'd like to discuss with me, or want the source code and mind-map materials, please add a note reading "CoreImage face detection - iOS" when contacting me.
My tech discussion QQ group: 656315826

1. Import the frameworks and set up a custom camera

1. Import the frameworks

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

2. Set up the custom camera

  • Initialize the camera
#pragma mark - Initialize the camera
- (void)getCameraSession{
    // Create the capture session
    _captureSession = [[AVCaptureSession alloc] init];
    if ([_captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) { // set the resolution
        _captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    // Get the input device (the front camera)
    AVCaptureDevice *captureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionFront];
    if (!captureDevice) {
        NSLog(@"Failed to get the front camera.");
        return;
    }
    NSError *error = nil;
    // Create the device input from the input device, used to obtain input data
    _captureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    if (error) {
        NSLog(@"Failed to create the device input, error: %@", error.localizedDescription);
        return;
    }
    // Create the device output, used to obtain output data
    _captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
    [_captureStillImageOutput setOutputSettings:outputSettings]; // output settings
    // Add the device input to the session
    if ([_captureSession canAddInput:_captureDeviceInput]) {
        [_captureSession addInput:_captureDeviceInput];
    }
    // Add the device output to the session
    if ([_captureSession canAddOutput:_captureStillImageOutput]) {
        [_captureSession addOutput:_captureStillImageOutput];
    }
    // Create the video preview layer, used to show the live camera feed
    _captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    CALayer *layer = self.videoMainView.layer;
    layer.masksToBounds = YES;
    _captureVideoPreviewLayer.frame = layer.bounds;
    _captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
    // Add the video preview layer to the view, below the focus cursor
    [layer insertSublayer:_captureVideoPreviewLayer below:self.focusCursor.layer];
}
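The helper `getCameraDeviceWithPosition:` referenced above is not shown in the article. A minimal sketch, assuming the pre-iOS 10 `devicesWithMediaType:` API that matches the era of the rest of the code, might look like this:

```objectivec
// Hypothetical implementation of the helper used above: returns the
// camera device at the requested position (front or back), or nil.
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position{
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if (camera.position == position) {
            return camera;
        }
    }
    return nil;
}
```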

3. Get the camera's video data stream

Since I need to run face detection continuously, I have to enable the video data stream. This requires setting up a data output and conforming to its delegate protocol.

// Conform to the delegate protocol: <AVCaptureVideoDataOutputSampleBufferDelegate>
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:queue];
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *settings = @{key: value};
[captureOutput setVideoSettings:settings];
[self.captureSession addOutput:captureOutput];

4. Implement the video data stream's delegate method

#pragma mark - Sample Buffer Delegate
// Delegate method invoked whenever a sample buffer is written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection{
}

// This method converts a frame of the data stream into an image.
// In the delegate method above, sampleBuffer is a Core Media object that
// can be bridged into Core Video for use.
// Create a UIImage from the sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                                       CVPixelBufferGetWidth(imageBuffer),
                                                                       CVPixelBufferGetHeight(imageBuffer))];
    UIImage *result = [[UIImage alloc] initWithCGImage:videoImage
                                                 scale:1.0
                                           orientation:UIImageOrientationLeftMirrored];
    CGImageRelease(videoImage);
    return result;
}
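The delegate method above is left empty in the article. A sketch of how its body might tie the pieces together (calling `imageFromSampleBuffer:` from this section, plus `fixOrientation:` and `detectFaceWithImage:` from sections 5 and 6; the dispatch back to the main queue is my assumption, since the delegate runs on a background queue):

```objectivec
// Hypothetical wiring of the delegate method: frame -> image -> detection.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection{
    UIImage *frame = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *upright = [self fixOrientation:frame];
    NSArray *faces = [self detectFaceWithImage:upright];
    // The delegate runs on the serial background queue created earlier,
    // so hop back to the main queue before touching any UI.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"Detected %lu face(s)", (unsigned long)faces.count);
    });
}
```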

5. Process the image

One thing to note here: the images produced by the conversion method above come out rotated, so they need to be rotated back before use.

/**
 Fixes the image's rotation (e.g. a 90-degree turn)
 @param aImage the image to fix
 @return the re-oriented image
 */
- (UIImage *)fixOrientation:(UIImage *)aImage{
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp)
        return aImage;
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }
    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }
    // Now we draw the underlying CGImage into a new context, applying the
    // transform calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Width and height are swapped for the rotated cases
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }
    // Now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}

6. Use CIDetector in CoreImage for face detection

/** Detect faces */
- (NSArray *)detectFaceWithImage:(UIImage *)faceImag{
    // CIDetectorAccuracyHigh is used here; for real-time face detection,
    // use CIDetectorAccuracyLow instead, which is faster.
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    CIImage *ciimg = [CIImage imageWithCGImage:faceImag.CGImage];
    NSArray *features = [faceDetector featuresInImage:ciimg];
    return features;
}
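The array returned above contains `CIFaceFeature` objects. A sketch of how the results might be consumed (the `image` variable is a stand-in for whatever frame you pass in); note that CoreImage coordinates have their origin at the bottom-left, so `bounds` must be flipped before drawing in UIKit:

```objectivec
// Iterate the detected faces and log their geometry.
NSArray *features = [self detectFaceWithImage:image];
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye at: %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasMouthPosition) {
        NSLog(@"Mouth at: %@", NSStringFromCGPoint(face.mouthPosition));
    }
}
```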

Summary

Demo source code

To get the demo's source code, you can add my tech discussion QQ group: 656315826. If you have questions about anything I wrote, you can also reach me in the group, and I'll reply as soon as I see your message.

My approach is to take the data from the camera, convert each frame into an image in the delegate method, and then run face detection on that image. It works, but it is quite heavy on performance, and for now I don't know of a better way to do it. If you have a better approach, please leave a comment and let me know. Thanks!

