amazonka-rekognition-1.4.5: Amazon Rekognition SDK.

Copyright    : (c) 2013-2016 Brendan Hay
License      : Mozilla Public License, v. 2.0.
Maintainer   : Brendan Hay <brendan.g.hay@gmail.com>
Stability    : auto-generated
Portability  : non-portable (GHC extensions)
Safe Haskell : None
Language     : Haskell2010

Network.AWS.Rekognition.DetectFaces

Contents

Description

Detects faces within an image (JPEG or PNG) that is provided as input.

For each face detected, the operation returns face details including a bounding box of the face, a confidence value (that the bounding box contains a face), and a fixed set of attributes such as facial landmarks (for example, coordinates of the eyes and mouth), gender, and the presence of a beard or sunglasses.

The face-detection algorithm is most effective on frontal faces. For non-frontal or obscured faces, the algorithm may not detect the faces or might detect faces with lower confidence.

For an example, see 'get-started-exercise-detect-faces'.

This operation requires permissions to perform the rekognition:DetectFaces action.
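A minimal end-to-end sketch of invoking this operation with amazonka. It assumes the newEnv/runAWS/send workflow from Network.AWS (amazonka-1.4.x style), the Image and S3Object smart constructors and lenses from Network.AWS.Rekognition.Types, and operators from the lens package; the bucket and object key are hypothetical placeholders.

  {-# LANGUAGE OverloadedStrings #-}

  import Control.Lens ((&), (?~), (^.))
  import Network.AWS
  import Network.AWS.Rekognition

  -- Detect faces in an S3-hosted image and print how many faces were found.
  main :: IO ()
  main = do
    env <- newEnv NorthVirginia Discover   -- region plus credential discovery (1.4.x signature)
    let img = image & iS3Object ?~ (s3Object & soBucket ?~ "my-bucket"          -- hypothetical bucket
                                             & soName   ?~ "photos/group.jpg")  -- hypothetical key
    rs <- runResourceT . runAWS env $ send (detectFaces img)
    print (length (rs ^. dfrsFaceDetails))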

Synopsis

Creating a Request

detectFaces #

Arguments

  :: Image          -- dfImage
  -> DetectFaces

Creates a value of DetectFaces with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • dfAttributes - A list of facial attributes you would like to be returned. By default, the API returns a subset of facial attributes. For example, you can specify the value as [ALL] or [DEFAULT]. If you provide both, [ALL, DEFAULT], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes). If you specify all attributes, Amazon Rekognition performs additional detection.
  • dfImage - The image in which you want to detect faces. You can specify a blob or an S3 object.
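For illustration, a request built with the smart constructor can be adjusted afterwards through these lenses. This sketch reuses the hypothetical S3 objects from the example above:

  {-# LANGUAGE OverloadedStrings #-}

  import Control.Lens ((&), (.~), (?~))
  import Network.AWS.Rekognition

  -- Build a request for one (hypothetical) S3 object, then retarget it with the dfImage lens.
  req, req' :: DetectFaces
  req  = detectFaces (image & iS3Object ?~ (s3Object & soBucket ?~ "my-bucket"
                                                     & soName   ?~ "photos/a.jpg"))
  req' = req & dfImage .~ (image & iS3Object ?~ (s3Object & soBucket ?~ "my-bucket"
                                                          & soName   ?~ "photos/b.jpg"))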

data DetectFaces #

See: detectFaces smart constructor.

Instances

Eq DetectFaces # 
Data DetectFaces # 

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> DetectFaces -> c DetectFaces #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c DetectFaces #

toConstr :: DetectFaces -> Constr #

dataTypeOf :: DetectFaces -> DataType #

dataCast1 :: Typeable (* -> *) t => (forall d. Data d => c (t d)) -> Maybe (c DetectFaces) #

dataCast2 :: Typeable (* -> * -> *) t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c DetectFaces) #

gmapT :: (forall b. Data b => b -> b) -> DetectFaces -> DetectFaces #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> DetectFaces -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> DetectFaces -> r #

gmapQ :: (forall d. Data d => d -> u) -> DetectFaces -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> DetectFaces -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> DetectFaces -> m DetectFaces #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectFaces -> m DetectFaces #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectFaces -> m DetectFaces #

Read DetectFaces # 
Show DetectFaces # 
Generic DetectFaces # 

Associated Types

type Rep DetectFaces :: * -> * #

Hashable DetectFaces # 
ToJSON DetectFaces # 
NFData DetectFaces # 

Methods

rnf :: DetectFaces -> () #

AWSRequest DetectFaces # 
ToQuery DetectFaces # 
ToPath DetectFaces # 
ToHeaders DetectFaces # 

Methods

toHeaders :: DetectFaces -> [Header] #

type Rep DetectFaces # 
type Rep DetectFaces = D1 (MetaData "DetectFaces" "Network.AWS.Rekognition.DetectFaces" "amazonka-rekognition-1.4.5-7kQQXfpD3s0BAUEtxNCePO" False) (C1 (MetaCons "DetectFaces'" PrefixI True) ((:*:) (S1 (MetaSel (Just Symbol "_dfAttributes") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe [Attribute]))) (S1 (MetaSel (Just Symbol "_dfImage") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 Image))))
type Rs DetectFaces # 

Request Lenses

dfAttributes :: Lens' DetectFaces [Attribute] #

A list of facial attributes you would like to be returned. By default, the API returns a subset of facial attributes. For example, you can specify the value as [ALL] or [DEFAULT]. If you provide both, [ALL, DEFAULT], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes). If you specify all attributes, Amazon Rekognition performs additional detection.

dfImage :: Lens' DetectFaces Image #

The image in which you want to detect faces. You can specify a blob or an S3 object.

Destructuring the Response

detectFacesResponse #

Arguments

  :: Int                      -- dfrsResponseStatus
  -> DetectFacesResponse

Creates a value of DetectFacesResponse with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • dfrsOrientationCorrection - The algorithm detects the image orientation. If it detects that the image was rotated, it returns the degrees of rotation. If your application is displaying the image, you can use this value to adjust the orientation. For example, if the service detects that the input image was rotated by 90 degrees, it corrects orientation, performs face detection, and then returns the faces. That is, the bounding box coordinates in the response are based on the corrected orientation.
  • dfrsFaceDetails - Details of each face found in the image.
  • dfrsResponseStatus - The response status code.

data DetectFacesResponse #

See: detectFacesResponse smart constructor.

Instances

Eq DetectFacesResponse # 
Data DetectFacesResponse # 

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> DetectFacesResponse -> c DetectFacesResponse #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c DetectFacesResponse #

toConstr :: DetectFacesResponse -> Constr #

dataTypeOf :: DetectFacesResponse -> DataType #

dataCast1 :: Typeable (* -> *) t => (forall d. Data d => c (t d)) -> Maybe (c DetectFacesResponse) #

dataCast2 :: Typeable (* -> * -> *) t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c DetectFacesResponse) #

gmapT :: (forall b. Data b => b -> b) -> DetectFacesResponse -> DetectFacesResponse #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> DetectFacesResponse -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> DetectFacesResponse -> r #

gmapQ :: (forall d. Data d => d -> u) -> DetectFacesResponse -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> DetectFacesResponse -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> DetectFacesResponse -> m DetectFacesResponse #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectFacesResponse -> m DetectFacesResponse #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectFacesResponse -> m DetectFacesResponse #

Read DetectFacesResponse # 
Show DetectFacesResponse # 
Generic DetectFacesResponse # 
NFData DetectFacesResponse # 

Methods

rnf :: DetectFacesResponse -> () #

type Rep DetectFacesResponse # 
type Rep DetectFacesResponse = D1 (MetaData "DetectFacesResponse" "Network.AWS.Rekognition.DetectFaces" "amazonka-rekognition-1.4.5-7kQQXfpD3s0BAUEtxNCePO" False) (C1 (MetaCons "DetectFacesResponse'" PrefixI True) ((:*:) (S1 (MetaSel (Just Symbol "_dfrsOrientationCorrection") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe OrientationCorrection))) ((:*:) (S1 (MetaSel (Just Symbol "_dfrsFaceDetails") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe [FaceDetail]))) (S1 (MetaSel (Just Symbol "_dfrsResponseStatus") NoSourceUnpackedness SourceStrict DecidedUnpack) (Rec0 Int)))))

Response Lenses

dfrsOrientationCorrection :: Lens' DetectFacesResponse (Maybe OrientationCorrection) #

The algorithm detects the image orientation. If it detects that the image was rotated, it returns the degrees of rotation. If your application is displaying the image, you can use this value to adjust the orientation. For example, if the service detects that the input image was rotated by 90 degrees, it corrects orientation, performs face detection, and then returns the faces. That is, the bounding box coordinates in the response are based on the corrected orientation.

dfrsFaceDetails :: Lens' DetectFacesResponse [FaceDetail] #

Details of each face found in the image.

dfrsResponseStatus :: Lens' DetectFacesResponse Int #

The response status code.
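As a rough sketch of working with these lenses on a response obtained from send (as in the request example above). The fdConfidence lens on FaceDetail is assumed to come from Network.AWS.Rekognition.Types:

  import Control.Lens ((^.))
  import Network.AWS.Rekognition

  -- Print the detected orientation correction, each face's confidence score
  -- (fdConfidence is assumed), and the HTTP status of a DetectFacesResponse.
  summarise :: DetectFacesResponse -> IO ()
  summarise rs = do
    print (rs ^. dfrsOrientationCorrection)   -- Nothing if no rotation was detected
    mapM_ (\fd -> print (fd ^. fdConfidence)) (rs ^. dfrsFaceDetails)
    print (rs ^. dfrsResponseStatus)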