It has been a while since I coded CharpHat, an app that lets you snap a picture of anything and put a nice C# graduation cap on it. That app was far from perfect, but it helped me practice the use of custom page renderers.

Today I decided to revisit that project, but this time trying to isolate the code needed to build the interface and functionality of the page, so that anyone looking to implement a full camera page in their app can reuse the code in their own projects. So be sure to grab the source code for this post.

Forms abstractions

Here is the source code for this section.

Let’s start by creating the Xamarin.Forms page that will serve as our point of interaction with the custom code:

public class CameraPage : ContentPage
{
    public delegate void PhotoResultEventHandler(PhotoResultEventArgs result);
    public event PhotoResultEventHandler OnPhotoResult;
}

Business as usual: create a class deriving from ContentPage. I have added an event as I want to access the picture taken by the user. Now let’s throw in some methods to call whenever the user performs an action on our camera page (in this case, the user will be allowed to take a photo or cancel the action):

public void SetPhotoResult(byte[] image, int width = -1, int height = -1)
{
    OnPhotoResult?.Invoke(new PhotoResultEventArgs(image, width, height));
}

public void Cancel()
{
    OnPhotoResult?.Invoke(new PhotoResultEventArgs());
}

For reference, see the properties inside the PhotoResultEventArgs class:

public bool Success { get; private set; }
public int Width { get; private set; }
public int Height { get; private set; }
public byte[] Image { get; private set; }

Now, time to move on to the platform specifics.

In Xamarin.iOS

Here is the source code for this section.

To be honest, this implementation is the easiest by far. Start off by creating a class that inherits from PageRenderer, and add the ExportRenderer attribute:

[assembly: ExportRenderer(typeof(CameraPage), typeof(CameraPageRenderer))]
namespace FullCameraPage.iOS
{
    public class CameraPageRenderer : PageRenderer
    {
    }
}

Now, and this is very important, you need to override the ViewDidLoad method, since it gets called as soon as our page is loaded by the iOS mechanisms. For the sake of organisation, let’s split the code into several other methods:

public override async void ViewDidLoad()
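A minimal sketch of how this override could orchestrate the work is shown below; the helper names (SetupUserInterface, SetupEventHandlers, AuthorizeCameraUse, SetupLiveCameraStream) are assumptions that mirror the sections that follow:

```csharp
public override async void ViewDidLoad()
{
    base.ViewDidLoad();

    SetupUserInterface();        // build the controls in code
    SetupEventHandlers();        // wire up the two buttons

    await AuthorizeCameraUse();  // ask the user for camera permission
    SetupLiveCameraStream();     // configure the AVCaptureSession
}
```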


As the name states, here is where you need to build the UI. As you may have guessed, it is all done in code, but don’t worry, it is very easy… as long as your UI isn’t too complex; in any case, you can do whatever you need here.

For this sample the UI will consist of a couple of buttons and a surface where the live preview from the camera will be shown, so you need to declare them at class-level scope:

VectorButton takePhotoButton;
VectorButton cancelPhotoButton;
UIView liveCameraStream;

To set the items in place you need to think as if you were working with a relative layout, meaning that you need to set the position of each item within the screen. For example, look at how the live camera preview view is positioned:

private void SetupUserInterface()
{
    // Code omitted ...
    liveCameraStream = new UIView()
    {
        Frame = new CGRect(0f, 0f, View.Bounds.Width, View.Bounds.Height)
    };
    // Code omitted ...
}


Now that the UI has been built, let’s hook up the event handlers for each control. Luckily, for this sample there are only two buttons on screen: one to take the picture and the other to cancel the whole thing.

cancelPhotoButton.TouchUpInside += (s, e) =>
{
    (Element as CameraPage).Cancel();
};

takePhotoButton.TouchUpInside += async (s, e) =>
{
    var data = await CapturePhoto();
    UIImage imageInfo = new UIImage(data);

    (Element as CameraPage).SetPhotoResult(data.ToArray(),
                                           (int)imageInfo.Size.Width,
                                           (int)imageInfo.Size.Height);
};

The Element property contains a reference to the page associated with the renderer, and it is our way to interact with our Forms project. As for the CapturePhoto method… we’ll see it later.


Now it’s time to ask the user for permission to access the camera:

var authorizationStatus = AVCaptureDevice.GetAuthorizationStatus(AVMediaType.Video);
if (authorizationStatus != AVAuthorizationStatus.Authorized)
    await AVCaptureDevice.RequestAccessForMediaTypeAsync(AVMediaType.Video);

But wait a minute: before executing the code above, make sure you have added the key Privacy - Camera Usage Description to the Info.plist of your project.
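For reference, this is what the entry looks like in the raw Info.plist XML (the description string is just an example, use whatever fits your app):

```xml
<key>NSCameraUsageDescription</key>
<string>This app needs access to the camera to take photos.</string>
```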


Now the “heavy” stuff.

Start by declaring, at class-level scope, an AVCaptureSession, an AVCaptureDeviceInput and an AVCaptureStillImageOutput, as they will help us access the camera, display the live feed and capture the photo.

Then, inside the SetupLiveCameraStream method, initialize the capture session, create a preview layer with the same size as our liveCameraStream, and add it as a sublayer of it:

    captureSession = new AVCaptureSession();
    var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
    {
        Frame = liveCameraStream.Bounds
    };
    liveCameraStream.Layer.AddSublayer(videoPreviewLayer);

Next, “create” a capture device (you can configure it to work according to your needs), and then from it create an input source for the capture session:

    var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
    captureDeviceInput = AVCaptureDeviceInput.FromDevice(captureDevice);

We have an input (the camera of the device); now we need an output, which is going to be a JPEG photograph:

    var dictionary = new NSMutableDictionary();
    dictionary[AVVideo.CodecKey] = new NSNumber((int)AVVideoCodec.JPEG);
    stillImageOutput = new AVCaptureStillImageOutput()
    {
        OutputSettings = new NSDictionary()
    };

Finalize by setting the input and output of the capture session and starting it:
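The snippet for this step is not shown above; assuming the class-level fields declared earlier, the wiring could look like this:

```csharp
captureSession.AddInput(captureDeviceInput);
captureSession.AddOutput(stillImageOutput);
captureSession.StartRunning();
```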



At last, the icing on the cake: the code to capture the photo. The code is pretty simple: take the output and capture a still image from it; since we only need the bytes, we get an NSData containing the taken photo.

public async Task<NSData> CapturePhoto()
{
    var videoConnection = stillImageOutput.ConnectionFromMediaType(AVMediaType.Video);
    var sampleBuffer = await stillImageOutput.CaptureStillImageTaskAsync(videoConnection);
    var jpegImageAsNsData = AVCaptureStillImageOutput.JpegStillToNSData(sampleBuffer);
    return jpegImageAsNsData;
}

In Xamarin.Android

Here is the source code for this section.
This implementation isn’t as clean as the iOS one, mainly because Android puts a lot of emphasis on the use of listeners rather than event handlers. However, that is not a problem for us.

As with the iOS implementation, start by creating a new class, make it derive from PageRenderer, and also make it implement the TextureView.ISurfaceTextureListener interface. Don’t forget the ExportRenderer attribute:

[assembly: Xamarin.Forms.ExportRenderer(typeof(CameraPage), typeof(CameraPageRenderer))]
namespace FullCameraPage.Droid
{
    public class CameraPageRenderer : PageRenderer, TextureView.ISurfaceTextureListener
    {
    }
}

Then, override the OnElementChanged method (if you have created custom renderers before, this method may be familiar to you); it is going to be called every time a CameraPage is shown on screen:

protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Page> e)
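A minimal body for this override could call the setup methods described below and attach the resulting layout to the renderer. The helper names SetupUserInterface and SetupEventHandlers are assumptions that mirror the following sections:

```csharp
protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Page> e)
{
    base.OnElementChanged(e);
    if (e.OldElement != null || Element == null)
        return;             // nothing to do if we already set up, or have no page

    SetupUserInterface();   // build the layout in code
    SetupEventHandlers();   // shutter button + texture listener
    AddView(mainLayout);    // attach our layout to the renderer
}
```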


In this method we are supposed to create the camera page itself. You can do it by creating an axml file and calling all the Android inflating stuff… or, as in this sample, you can create it in code.

For this sample we’ll need a RelativeLayout to work as a container, a TextureView to display the live feed from the camera, and a Button (a PaintCodeButton, actually) to snap the photograph. Declare them all at class-level scope:

RelativeLayout mainLayout;
TextureView liveView;
PaintCodeButton capturePhotoButton;

Now, proceed to create them and add them to the screen. For example, see how we can create the container layout and add the TextureView to it:

void SetupUserInterface()
{
    mainLayout = new RelativeLayout(Context);
    RelativeLayout.LayoutParams mainLayoutParams = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MatchParent,
        RelativeLayout.LayoutParams.MatchParent);
    mainLayout.LayoutParameters = mainLayoutParams;

    liveView = new TextureView(Context);
    RelativeLayout.LayoutParams liveViewParams = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MatchParent,
        RelativeLayout.LayoutParams.MatchParent);
    liveView.LayoutParameters = liveViewParams;
    mainLayout.AddView(liveView);
    // Code omitted...
}


Before continuing, there is another method (OnLayout) we need to override to give our main layout its size (and accommodate the UI accordingly):

protected override void OnLayout(bool changed, int l, int t, int r, int b)
{
    base.OnLayout(changed, l, t, r, b);
    if (!changed)
        return;

    var msw = MeasureSpec.MakeMeasureSpec(r - l, MeasureSpecMode.Exactly);
    var msh = MeasureSpec.MakeMeasureSpec(b - t, MeasureSpecMode.Exactly);
    mainLayout.Measure(msw, msh);
    mainLayout.Layout(0, 0, r - l, b - t);

    capturePhotoButton.SetX(mainLayout.Width / 2 - 60);
    capturePhotoButton.SetY(mainLayout.Height - 200);
}


As I said, Android relies mostly on event listeners rather than handlers, so the code for this method is pretty simple. We need to set an event handler for the “shutter” button and assign the listener that will be aware of the SurfaceTexture status (remember that our page renderer implements an interface?):

capturePhotoButton.Click += async (sender, e) =>
{
    var bytes = await TakePhoto();
    (Element as CameraPage).SetPhotoResult(bytes, liveView.Bitmap.Width, liveView.Bitmap.Height);
};
liveView.SurfaceTextureListener = this;

And one more thing: let’s override the default behavior of the “back” button so that it acts as a cancel button for the camera:

public override bool OnKeyDown(Keycode keyCode, KeyEvent e)
{
    if (keyCode == Keycode.Back)
    {
        (Element as CameraPage).Cancel();
        return false;
    }
    return base.OnKeyDown(keyCode, e);
}

TextureView.ISurfaceTextureListener implementation

Now it’s time to implement the core of our page. Start by writing the code for OnSurfaceTextureAvailable, where we will prepare the output for the camera. But first we’ll need a camera, right?

At class-level scope declare a Camera:

Android.Hardware.Camera camera;

Inside the method, open the camera (by default it’ll try to open the back camera of the device) and get its parameters. We need them to select the right preview size, because we want things to look great in our app:

camera = Android.Hardware.Camera.Open();
var parameters = camera.GetParameters();

Once we have the parameters at hand, we can get the available PreviewSizes and pick the one that best fits our preview surface. In this case I’m using a simple LINQ expression to get the best preview size based on aspect ratio:

var aspect = ((decimal)height) / ((decimal)width);

var previewSize = parameters.SupportedPreviewSizes
                            .OrderBy(s => Math.Abs(s.Width / (decimal)s.Height - aspect))
                            .First();

parameters.SetPreviewSize(previewSize.Width, previewSize.Height);

Finish by setting our surface as the preview texture; at this point the only thing left to do is start the camera:
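The snippet for this step is missing above; assuming the StartCamera helper described later and the surface parameter of OnSurfaceTextureAvailable, it could look like this:

```csharp
camera.SetParameters(parameters);    // apply the preview size we just picked
camera.SetPreviewTexture(surface);   // render the live feed into our TextureView
StartCamera();
```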


The other method we need to write code into is OnSurfaceTextureDestroyed, in order to stop the camera. Just write the following inside and that’ll be all:

public bool OnSurfaceTextureDestroyed(SurfaceTexture surface)
{
    StopCamera();
    return true;
}

StartCamera and StopCamera

These two methods are quite simple too. For StartCamera we only need to rotate the preview to make it look right on the screen (in this case I’m setting it to be viewed vertically), and then, finally, start the camera:
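The body is not shown above; a minimal sketch, assuming a portrait-only page, could be:

```csharp
public void StartCamera()
{
    camera.SetDisplayOrientation(90);  // rotate the preview to portrait
    camera.StartPreview();
}
```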


The StopCamera method stops the preview and releases the camera, so that other apps can access it:
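Again, the body is missing above; a minimal sketch matching that description:

```csharp
public void StopCamera()
{
    camera.StopPreview();
    camera.Release();  // let other apps use the camera again
}
```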



In order to get a photo, the only thing we need to do is grab a still image from the live feed presented in the TextureView. Here is the code to do so and then return the image as bytes:

var ratio = ((decimal)Height) / Width;
var image = Bitmap.CreateBitmap(liveView.Bitmap, 0, 0, liveView.Bitmap.Width, (int)(liveView.Bitmap.Width * ratio));
byte[] imageBytes = null;
using (var imageStream = new System.IO.MemoryStream())
{
    await image.CompressAsync(Bitmap.CompressFormat.Jpeg, 50, imageStream);
    imageBytes = imageStream.ToArray();
}
return imageBytes;

And that’s it. After all that code, you can now make use of this camera page. Keep reading for sample usage code:

Usage in Forms

var cameraPage = new CameraPage();
cameraPage.OnPhotoResult += CameraPage_OnPhotoResult;
// ...
async void CameraPage_OnPhotoResult(Pages.PhotoResultEventArgs result)
{
    await Navigation.PopModalAsync();
    if (!result.Success)
        return;
    Image.Source = ImageSource.FromStream(() => new MemoryStream(result.Image));
}

If you download the source code and run it, you will see something like this:


The code for this post was entirely based on the code from CharpHat, which in turn was based on the Moments app by Pierce Boggan.

Having doubts? Comments?