Currently the save size is based on the window size rather than the source dimensions. It would be good for the saved image dimensions to match the source dimensions. Combined with the changes proposed in issue #12, this would become more useful.
What I have explored
I explored simply setting the window size to the dimensions of the device's activeFormat, but in some cases this results in a window larger than the screen, which is less than ideal.
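One way to work around the oversized-window problem might be to clamp the window to the screen while preserving the source aspect ratio. This is only a sketch; `fitSize` is a hypothetical helper, not something in the existing code:

```swift
import Foundation

// Hypothetical helper: scale the source dimensions down (never up) so the
// window fits within the screen's visible frame while keeping the aspect
// ratio of the activeFormat.
func fitSize(_ source: CGSize, into bounds: CGSize) -> CGSize {
    let scale = min(bounds.width / source.width,
                    bounds.height / source.height,
                    1.0) // cap at 1.0 so small formats are not upscaled
    return CGSize(width: (source.width * scale).rounded(),
                  height: (source.height * scale).rounded())
}
```

The window would then be sized with something like `fitSize(resolution, into: NSScreen.main!.visibleFrame.size)`, while the full-resolution dimensions are still used for saving.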
An alternative approach would be to render the capture frame, with its transforms applied, into an offscreen window and use that for the image data. This would also avoid the current workaround of hiding the window border.
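The offscreen idea could look something like the sketch below, which draws a layer tree into an `NSImage` at an arbitrary size. One caveat I am unsure about: `CALayer.render(in:)` is not guaranteed to capture the live video content of an `AVCaptureVideoPreviewLayer`, which may be a point in favour of the `AVCapturePhotoOutput` route instead. `offscreenImage` is an illustrative name, not existing code:

```swift
import AppKit

// Hedged sketch: draw a layer (e.g. the capture layer with its transforms
// applied) into an offscreen bitmap at the source resolution, independent
// of any visible window.
func offscreenImage(of layer: CALayer, at size: NSSize) -> NSImage {
    let image = NSImage(size: size)
    image.lockFocus()
    if let ctx = NSGraphicsContext.current?.cgContext {
        layer.render(in: ctx) // may not include AVCaptureVideoPreviewLayer video content
    }
    image.unlockFocus()
    return image
}
```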
This is the code I was experimenting with in the startCaptureWithVideoDevice() method:
self.input = try AVCaptureDeviceInput(device: device)
self.captureSession.addInput(input)
self.captureSession.startRunning()
self.captureLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
self.captureLayer.connection?.automaticallyAdjustsVideoMirroring = false
self.captureLayer.connection?.isVideoMirrored = false

// find the active format dimensions
let formatDescription = self.input.device.activeFormat.formatDescription
let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
let width = CGFloat(dimensions.width)
let height = CGFloat(dimensions.height)
let resolution = CGSize(width: width, height: height)

self.playerView.layer = self.captureLayer
self.playerView.controlsStyle = AVPlayerViewControlsStyle.none
self.playerView.layer?.backgroundColor = CGColor.black
self.windowTitle = String(format: "Quick Camera: [%@]", device.localizedName)
self.window.title = self.windowTitle

// apply the active format dimensions to the window
self.window.setContentSize(NSSize(width: width, height: height))
self.window.setFrame(
    NSRect(x: self.window.frame.origin.x, y: self.window.frame.origin.y,
           width: width, height: height),
    display: true, animate: true)
fixAspectRatio()
I am currently looking at AVCapturePhotoOutput as a possible approach, but macOS programming is not my usual focus.
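For reference, a minimal sketch of what the AVCapturePhotoOutput route might look like. The idea is that the output captures a still at the device's native resolution, so the saved image no longer depends on the window size at all. `SnapshotDelegate` and the file path are illustrative assumptions, not part of the existing code:

```swift
import AVFoundation

// Hedged sketch: a delegate that receives the full-resolution still and
// writes it out. In the real app the destination would come from the
// save panel rather than a temporary path.
final class SnapshotDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        // `data` holds the encoded image at the source dimensions,
        // independent of the preview window size.
        try? data.write(to: URL(fileURLWithPath: NSTemporaryDirectory() + "capture.jpg"))
    }
}

// During session setup (alongside the existing addInput call):
//   let photoOutput = AVCapturePhotoOutput()
//   if captureSession.canAddOutput(photoOutput) {
//       captureSession.addOutput(photoOutput)
//   }
// Then, when the user saves:
//   photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: snapshotDelegate)
```

The delegate instance would need to be retained for the duration of the capture, since `capturePhoto(with:delegate:)` does not retain it.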