Snowy Monkey

Building a software synth

UIViewController methods not being called? February 21, 2010

Filed under: iphone,xcode — snowy monkey @ 10:53 pm

Is your UIViewController not working correctly? Maybe viewDidLoad is being called, but nothing else seems to be? Check your view controller's connections in Interface Builder – is its view outlet connected? Disconnect it and wire the view up manually in your loadView method instead. Now your other methods will magically be called correctly. I have no idea why this is.

I’ve just spent the better part of an afternoon and evening trying to figure this out, so I thought I’d share it.

 

Weird OpenGL Screen Rotation Bug February 17, 2010

Filed under: iphone,screenshot,synth — snowy monkey @ 8:15 pm

I’m trying to make the synth behave nicely on the iPhone by having it auto-rotate when the screen rotates. The code I have works very well in the simulator, but shows screen corruption after the rotation. If you then rotate back, it displays correctly again. So it seems my resize code maybe isn’t up to snuff.

According to this post on Red Glasses (the link’s down now; apparently it was a draft version up for a select few to read, but Google got there early), it may be related to CALayer/UIView resizing. Looking at the behaviour, this seems likely.

Anyway, here’s the before & after. Any suggestions?

[Screenshots: before and after rotation]

 

One more for Valentine’s Day February 14, 2010

Filed under: iphone,screenshot,synth — snowy monkey @ 11:46 pm

One more post for Valentine’s Day. I quite like the disco-floor colours of this one. The Synth needs a distinct look. (Never mind a distinct name!)

 

Synth, GLified

Filed under: iphone,screenshot,synth — snowy monkey @ 11:15 pm

Synth 14/2/2010

Here’s a grab of the latest version of Synth. Not much has been going on at a superficial level, but quite a bit has changed under the hood:

  • OpenGL frontend – dabbled with OpenFrameworks, then wanted to be a bit more iPhone-friendly so tried Core Animation, then wanted some decent performance so implemented a simple 2D OpenGL engine.
  • Caching of notes – the audio is still synthesized, but given the simple nature of the notes, I cache them on start-up. It doesn’t use up much memory and is a hell of a lot faster than calculating them in real time.
  • And of course tried it out in the iPad simulator. It’s looking nice there – big chunky buttons!

So what’s next?

  • Been trying a bit of rotation detection, but it’s not quite there yet.
  • Better looks – it needs nicer button textures, an icon, a splash screen and a website.
  • Layers – it’s gotten to the stage where it’s fun to play, and sounds nice. Layers would make all the difference when playing.
  • Octave changes – it’s all there in the synth, just not exposed in the GUI.

There would still be loads to do after that (e.g., loading and saving compositions), but for a 1.0, maybe even a released version, I think I’d need most of the above.

 

My Audio Setup Code February 6, 2010

Filed under: iphone — snowy monkey @ 11:22 am

I previously stated that I had a fix for the AudioOutputUnitStart -66681 error, then updated the post stating I’d failed.

Today, Sijo asked if I had any idea how to fix the problem. Unfortunately I don’t… But I haven’t had that error for a long time, so I figured I might as well post my audio setup code here (along with the callbacks). Hopefully it’s a useful comparison. It works, and is reasonably documented (IMO). It’s not my exact code – it’s a couple of classes copied and pasted – so it’ll have to be adapted somewhat. Let me know if there are any problems.

First, the AudioSourceInterface. Implement this to provide the audio samples.

#pragma once

#include <CoreAudio/CoreAudioTypes.h>       // All this for SInt16...


class AudioSourceInterface
{
public:
    virtual void audioRequestedFloat(float* output, int numSamples, int numChannels) = 0;
    virtual void audioRequestedInt(SInt16* output, int numSamples, int numChannels) = 0;
};

Now the AudioEngine base class. First the header, which is later subclassed for the iPhone implementation.

#pragma once

#include <AudioUnit/AudioUnit.h>


class AudioSourceInterface;

class AudioEngine
{
public:
    AudioEngine();
    virtual ~AudioEngine() {}
    
    void setup(AudioSourceInterface* audioSource);
    void exit();
    
    
protected:
    AudioUnit               mAudioUnit;
    AudioSourceInterface*   mAudioSource;
    float                   mSampleRate;


    
    virtual void createAudioUnit() = 0;
    virtual void enableAudioUnit() {}
    void setupAudioUnit();


    void setupIntAudioStream(AudioStreamBasicDescription* audioFormat) const;
    void setupFloatAudioStream(AudioStreamBasicDescription* audioFormat) const;
    
    
    static OSStatus floatRenderer(void*                       inRefCon,
                                  AudioUnitRenderActionFlags* ioActionFlags,
                                  const AudioTimeStamp*       inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList*            ioData);
    
    static OSStatus intRenderer(void*                          inRefCon,
                                AudioUnitRenderActionFlags*    ioActionFlags,
                                const AudioTimeStamp*          inTimeStamp,
                                UInt32                         inBusNumber,
                                UInt32                         inNumberFrames,
                                AudioBufferList*               ioData);
};

And the AudioEngine cpp body…

#include "AudioEngine.h"

#include <cassert>      // assert
#include <cstring>      // memset, memcpy

#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/AudioComponent.h>

#include "AudioSourceInterface.h"

#include "Core.h"


AudioEngine::AudioEngine():
    mSampleRate(kSampleRate)
{
}


void AudioEngine::setup(AudioSourceInterface* audioSource)
{
    mAudioSource = audioSource;
    
    createAudioUnit();
    enableAudioUnit();
    setupAudioUnit();
}


void AudioEngine::setupAudioUnit()
{
	// We tell the Output Unit what format we're going to supply data to it
	// this is necessary if you're providing data through an input callback
	// AND you want the DefaultOutputUnit to do any format conversions
	// necessary from your format to the device's format.
	AudioStreamBasicDescription audioFormat;
    setupIntAudioStream(&audioFormat);
    
    OSStatus err = AudioUnitSetProperty(mAudioUnit, kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input, 0, &audioFormat, sizeof(audioFormat));
    assert(err == noErr);
    
    
    // Initialize unit
	err = AudioUnitInitialize(mAudioUnit);
    assert(err == noErr);
    
    
    // Set render callback
    AURenderCallbackStruct input;
	input.inputProc = intRenderer;
	input.inputProcRefCon = this;
    err = AudioUnitSetProperty(mAudioUnit, kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input, 0, &input, sizeof(input));
    assert(err == noErr);
    
    
	// Start the rendering
	// The DefaultOutputUnit will do any format conversions to the format of the default device
	err = AudioOutputUnitStart(mAudioUnit);
    assert(err == noErr);
    
    // we call the CFRunLoopRunInMode to service any notifications that the audio
    // system has to deal with
	CFRunLoopRunInMode(kCFRunLoopDefaultMode, 2, false);
}


void AudioEngine::exit()
{
	OSStatus err = AudioOutputUnitStop(mAudioUnit);
    assert(err == noErr);
	
    err = AudioUnitUninitialize(mAudioUnit);
    assert(err == noErr);
    
    err = AudioComponentInstanceDispose(mAudioUnit);
    assert(err == noErr);
}


void AudioEngine::setupIntAudioStream(AudioStreamBasicDescription* audioFormat) const
{
    memset(audioFormat, 0, sizeof(*audioFormat));
    audioFormat->mSampleRate		= mSampleRate;
    audioFormat->mFormatID			= kAudioFormatLinearPCM;
    audioFormat->mFormatFlags		= kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat->mFramesPerPacket	= 1;
    audioFormat->mChannelsPerFrame	= 1;
    audioFormat->mBitsPerChannel	= 16;
    audioFormat->mBytesPerPacket	= 2;
    audioFormat->mBytesPerFrame		= 2;
}


void AudioEngine::setupFloatAudioStream(AudioStreamBasicDescription* audioFormat) const
{
    memset(audioFormat, 0, sizeof(*audioFormat));
    audioFormat->mSampleRate		= mSampleRate;
    audioFormat->mFormatID			= kAudioFormatLinearPCM;
    audioFormat->mFormatFlags		= kAudioFormatFlagIsFloat | kLinearPCMFormatFlagIsNonInterleaved;
    audioFormat->mFramesPerPacket	= 1;
    audioFormat->mChannelsPerFrame	= 2;
    audioFormat->mBitsPerChannel	= 32;
    audioFormat->mBytesPerPacket	= 4;
    audioFormat->mBytesPerFrame		= 4;
}


OSStatus AudioEngine::floatRenderer(void*                       inRefCon,
                                    AudioUnitRenderActionFlags* ioActionFlags,
                                    const AudioTimeStamp*       inTimeStamp,
                                    UInt32                      inBusNumber,
                                    UInt32                      inNumberFrames,
                                    AudioBufferList*            ioData)
{
    assert(inNumberFrames <= kAudioBufferSizeFloats);
    
    AudioEngine* self = (AudioEngine*)inRefCon;
    
    float* buffer = static_cast<float*>(ioData->mBuffers[0].mData);
    UInt32 bufferSize = ioData->mBuffers[0].mDataByteSize;
    
    // Get synth sound data for one channel
    self->mAudioSource->audioRequestedFloat(buffer, inNumberFrames, 1);
    
    
    // Duplicate single channel across all channels
    for (UInt32 channel = 1; channel < ioData->mNumberBuffers; channel++)
		memcpy (ioData->mBuffers[channel].mData, buffer, bufferSize);
    
    return noErr;
}


OSStatus AudioEngine::intRenderer(void*                          inRefCon,
                                  AudioUnitRenderActionFlags*    ioActionFlags,
                                  const AudioTimeStamp*          inTimeStamp,
                                  UInt32                         inBusNumber,
                                  UInt32                         inNumberFrames,
                                  AudioBufferList*               ioData)
{
    assert(inNumberFrames <= kAudioBufferSizeFloats);
    
    AudioEngine* self = (AudioEngine*)inRefCon;
    
    SInt16* buffer = static_cast<SInt16*>(ioData->mBuffers[0].mData);
    
    
    // Get synth sound data for one channel
    self->mAudioSource->audioRequestedInt(buffer, inNumberFrames, 1);
    
    
    // Duplicate single channel across all channels
    UInt32 bufferSize = ioData->mBuffers[0].mDataByteSize;
    for (UInt32 channel = 1; channel < ioData->mNumberBuffers; channel++)
		memcpy (ioData->mBuffers[channel].mData, buffer, bufferSize);

    return noErr;
}

The AudioEngineIPhoneImpl header.

#pragma once

#include "AudioEngine.h"


class AudioEngineIPhoneImpl: public AudioEngine
{
protected:
    void enableAudioUnit();
    void createAudioUnit();
};

And the AudioEngineIPhoneImpl body.

#include "AudioEngineIPhoneImpl.h"

#include <cassert>      // assert

#include <AudioUnit/AudioComponent.h>


// http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/


const int kOutputBus = 0;
const int kInputBus = 1;


void AudioEngineIPhoneImpl::createAudioUnit()
{
    // Create the audio component instance
    AudioComponentDescription desc;
	desc.componentType = kAudioUnitType_Output;
	desc.componentSubType = kAudioUnitSubType_RemoteIO;
	desc.componentManufacturer = kAudioUnitManufacturer_Apple;
	desc.componentFlags = 0;
	desc.componentFlagsMask = 0;
    
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    assert(comp);
	
	OSStatus err = AudioComponentInstanceNew(comp, &mAudioUnit);
    assert(err == noErr);
}


void AudioEngineIPhoneImpl::enableAudioUnit()
{
	// Disable input - flag must be 0 to enable larger speakers,
    // otherwise small ear speaker is used.
    UInt32 flag = 0;
	OSStatus err = AudioUnitSetProperty(mAudioUnit, kAudioOutputUnitProperty_EnableIO,
                                        kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));
    assert(err == noErr);
    
	// Enable IO for output
    flag = 1;
	err = AudioUnitSetProperty(mAudioUnit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));
    assert(err == noErr);
}

Hopefully all that is a good comparison!

 

How to Access iPhone File Paths Properly January 10, 2010

Filed under: iphone — snowy monkey @ 6:38 pm

Paraphrased from Erica Sadun on TUAW over 2 years ago, but very useful for the newbie.

You want to access a data file in your iPhone app.  Let’s call it ‘MyAppSettings.xml’.  The first thing to do is create it and add it to your Xcode Resources folder (although it can physically reside anywhere).  It should automatically be added to the ‘Copy Bundle Resources’ list in your target.  When you next build your app, your file will be in the root of your bundle.  You can check by going into the app bundle in your build directory: right-click build/Release-iphoneos/MyApp (for example) and select ‘Show Package Contents’.  You should see your file there.

You find the proper path to this file using the following statement:

NSString* path = [[NSBundle mainBundle] pathForResource:@"MyAppSettings" ofType:@"xml"];

This will return an NSString with the full path you need.

 

Xcode Developer Documentation is Crashy January 8, 2010

Filed under: Uncategorized — snowy monkey @ 7:58 am

I really like most of the developer documentation app that is part of Xcode.  It’s nice and fast – better than accessing it from the browser.  The part I don’t like is that it crashes far too frequently for a piece of non-beta professional software.  And the part I really hate is that when it crashes, it takes the rest of Xcode down with it.  I’d love to be able to run it externally to Xcode.  Even better would be if it didn’t crash!  Finally, it’s a bit annoying not being able to cmd-tab between the main IDE window and the docs – if I’ve got more than one other IDE window, cycling to and from the documentation is a pain.
