Swift 2, AudioConverter, Callback and Core Audio

Currently I am working on bcAnalyze 3, our sound analysis tool. I decided to write it in Swift to learn the new language. Most of it went well and resulted in a first bcAnalyze 3 light version... but I hit a wall when I started on AudioConverters with a callback function written completely in Swift. With some help I managed to define a callback, but the AudioConverter didn't run; it threw an error when called via AudioConverterFillComplexBuffer. After some digging I realized that the AudioBufferList supplied to the callback function was losing its filled-in values. After some days of struggling I got it running - I only need sample rate conversions, so it runs at least for that.

How to get it running?
I supplied a struct as the user info for the callback. The struct holds the source sound data as well as the AudioBufferList... but the code shows it best:

struct audioIO {
    var pos: UInt32 = 0                   // current read position in packets
    var srcBuffer: Array<Float>           // the source samples
    var srcBufferSize: UInt32 = 0
    var srcSizePerPacket: UInt32 = 4      // bytes per packet (one Float)
    var numPacketsPerRead: UInt32 = 0     // maximum packets to hand out per callback
    var maxPacketsInSound: UInt32 = 0     // total packets (samples) in the sound
    var abl: AudioBufferList              // buffer list handed back to the converter
}

func fillComplexCallback(myConverter: AudioConverterRef, packetNumber: UnsafeMutablePointer<UInt32>, ioData: UnsafeMutablePointer<AudioBufferList>, aspd: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>>, userInfo: UnsafeMutablePointer<Void>) -> OSStatus {
    
    let myAIO = UnsafeMutablePointer<audioIO>(userInfo).memory
    
    // never hand out more packets than one read allows
    if (packetNumber.memory > myAIO.numPacketsPerRead) {
        packetNumber.memory = myAIO.numPacketsPerRead
    }
    
    // clamp the last read to the end of the sound
    if (packetNumber.memory + myAIO.pos > myAIO.maxPacketsInSound) {
        packetNumber.memory = myAIO.maxPacketsInSound - myAIO.pos
    }
    
    // copy the requested slice of the source samples
    let soundData: Array<Float> = myAIO.srcBuffer
    let _buffer: Array<Float> = Array(soundData[Int(myAIO.pos)..<Int(myAIO.pos + packetNumber.memory)])
    let outByteSize = packetNumber.memory * 4
    
    // advance the read position stored in the user info struct
    UnsafeMutablePointer<audioIO>(userInfo).memory.pos = myAIO.pos + packetNumber.memory
    
    // hand the slice to the converter via the AudioBufferList kept in the struct
    var abl = myAIO.abl
    abl.mBuffers.mDataByteSize = outByteSize
    abl.mBuffers.mNumberChannels = 1
    abl.mBuffers.mData = UnsafeMutablePointer<Void>(_buffer)
    ioData.memory = abl
    
    return 0
}

While the callback above is defined at file scope (not inside a class, for example), the following method is part of a class:

func convertSampleRate(newSamplerate: Int) {
        var converter = AudioConverterRef()
        var inputFormat = clientFormat!
        var outputFormat = clientFormat!
        outputFormat.mSampleRate = Double(newSamplerate)

        let result = AudioConverterNew(&inputFormat, &outputFormat, &converter)
        if result != 0 {
            print("Audio error \(result)")
            return
        }
        
        if inputFormat.mSampleRate != outputFormat.mSampleRate {
            
            // use the mastering-grade sample rate converter ...
            var prop = UInt32(kAudioConverterSampleRateConverterComplexity_Mastering)
            var err = AudioConverterSetProperty(converter, AudioConverterPropertyID(kAudioConverterSampleRateConverterComplexity), UInt32(sizeof(UInt32)), &prop)
            if err != 0 {
                print("Audio error 2 \(err)")
                return
            }

            // ... and run it at maximum quality
            prop = UInt32(kAudioConverterQuality_Max)
            err = AudioConverterSetProperty(converter, AudioConverterPropertyID(kAudioConverterSampleRateConverterQuality), UInt32(sizeof(UInt32)), &prop)
            if err != 0 {
                print("Audio error 3 \(err)")
                return
            }
                        
            // output buffer: 32768 bytes = 8192 Float packets per conversion pass
            let outputSizePerPacket: UInt32 = 4
            
            let givenSize: UInt32 = 32768
            let packetsPerRead: UInt32 = 32768 / 4
            
            var numOutputPackets = givenSize / outputSizePerPacket
            var buffer = [Float]()                 // collects the converted samples
            
            var _buffer: Array<Float> = [Float](count: Int(numOutputPackets), repeatedValue: 0)
            var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: givenSize, mData: &_buffer))
            
            var myAudioIO = audioIO(pos: UInt32(0), srcBuffer: soundData, srcBufferSize: givenSize, srcSizePerPacket: UInt32(4), numPacketsPerRead: packetsPerRead, maxPacketsInSound: UInt32(sampleCount), abl: audioBufferList)
            
            var outPos: UInt32 = 0
            
            while true {
                
                // request a full output buffer on every pass; the converter
                // overwrites numOutputPackets with the number of packets it produced
                numOutputPackets = givenSize / outputSizePerPacket
                audioBufferList.mBuffers.mDataByteSize = givenSize
                
                withUnsafeMutablePointers(&myAudioIO, &audioBufferList, &_buffer) { soundPointer, audioBufferPtr, bufferPtr in
                    err = AudioConverterFillComplexBuffer(converter, fillComplexCallback, soundPointer, &numOutputPackets, audioBufferPtr, nil)
                }
                
                //print(audioBufferList)
                
                if err != 0 {
                    print("Audio error 4 \(err)")
                    break
                }
                if (numOutputPackets < 1) {
                    // no more packets were produced - this is the EOF condition
                    break
                }
                buffer += Array(_buffer[0..<Int(numOutputPackets)])
                outPos += numOutputPackets
            }
            
            // release the converter now that conversion is finished
            AudioConverterDispose(converter)
            
            self.soundData = buffer
            self.sampleCount = buffer.count
            
        }

    }

I store the sound as a float array within that class, so I made a couple of assumptions in the above code, but it should give an idea of how to solve the problem for your own data. See also my post in the Apple Developer Forums.
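
For completeness: the code above assumes that clientFormat describes mono 32-bit float linear PCM, with one Float per packet (matching srcSizePerPacket = 4). A minimal sketch of such an AudioStreamBasicDescription - the sample rate and flags here are my assumptions, adapt them to your data - could look like this:

let clientFormat = AudioStreamBasicDescription(
    mSampleRate: 44100,                       // assumed source sample rate
    mFormatID: AudioFormatID(kAudioFormatLinearPCM),
    mFormatFlags: AudioFormatFlags(kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked),
    mBytesPerPacket: 4,                       // one Float per packet
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,                     // mono
    mBitsPerChannel: 32,
    mReserved: 0)

With a format like that in place, calling convertSampleRate(22050) should resample soundData in place and update sampleCount.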
