Just another Blog on iOS development
Recently I was involved in getting Calabash automation tests running on Jenkins, and I ran into a problem with the simulator. If you see this error:
Failed to authorize rights (0x1) with status: -60007. 2015-05-27 11:06:32.998 instruments[89714:3936018] -[XRSimulatorDevice prepareConnection:]: Unable to authorize simulated daemon (89737): 8
It basically means that you have configured your Jenkins instance to start up as either a LaunchDaemon or a LaunchAgent. This didn't used to cause a problem, but on Yosemite it does. The only way I found to get around this was to create an Automator application which runs the Jenkins process on user login. The downside is that if you reboot your CI Mac you will need to make sure you log the user in to get Jenkins to start up.
The following are the steps to perform this:
- Launch Applications -> Automator
- On launch, select New Document
- Choose the Application option
- Under Library, select Utilities and then Run Shell Script
- In the shell script box, enter your run command. I have included an example:
/Library/Application\ Support/Jenkins/jenkins-runner.sh &> /var/log/jenkins/jenkins.log &
- Save it via File -> Save and give it a name
- Go to System Preferences -> Users & Groups
- Select your user in the list and click on the Login Items tab
- Click the + button, add your Automator application, and enable it
- From there, Jenkins will always start up on user login.
I recently ran into a problem where a user had brought up the keyboard from a field in a completely different view on the screen. They then tapped a button to bring up a UIPopover. The keyboard remained visible, which pushed the whole UIPopover up. To get around this I needed a top-level way to resign the first responder: something that dismisses the keyboard no matter how deep I am in the view hierarchy of my app.
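The trick I used is the responder-chain action dispatch on UIApplication: sending `resignFirstResponder` with a nil target walks the chain until whichever view currently holds first-responder status handles it. A minimal sketch:

```objc
// Sends resignFirstResponder up the responder chain. Because the target is
// nil, UIKit delivers the action to the current first responder, wherever
// it sits in the view hierarchy -- so the keyboard is dismissed without
// needing a reference to the text field that summoned it.
[[UIApplication sharedApplication] sendAction:@selector(resignFirstResponder)
                                           to:nil
                                         from:nil
                                     forEvent:nil];
```

This can be called from anywhere, for example right before presenting the UIPopover, so the popover lays out against the full-height screen.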
So you have finished testing your app out in the wild and you are ready to get it submitted to the App Store. There are a few tools you can use once you are live which will let you know when things have gone wrong, what people are doing in your App, and how often.
When things go wrong
When an application crashes out in the wild, you need to know why. There is an excellent tool called Crashlytics. It will log the details of the error and how often it is happening. It will also tell you things like:
- Device it happened on
- Battery Status
- Orientation of Device
- Jailbreak status
- Whether they were on a Wi-Fi network
This tool is invaluable for debugging those nasty crashes out in the wild. It is even free to use, and all data broadcast over the network is SSL encrypted. Standard obfuscation rules apply to your iOS apps if you are dealing with private user data.
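Getting Crashlytics reporting is a one-liner at launch. A minimal sketch, assuming the SDK of that era and a placeholder API key (the real key comes from your Crashlytics account):

```objc
#import <Crashlytics/Crashlytics.h>

- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Starts crash reporting as early as possible so that crashes anywhere
    // in the launch path are captured. "your_api_key" is a placeholder.
    [Crashlytics startWithAPIKey:@"your_api_key"];
    return YES;
}
```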
Know how and when your App is used
Okay, so you want to know how and when a user uses your App. You are going to need some analytics embedded in there. Analytics are very important to a mobile App and will tell you not only how a user engages with your application but also, if you have any In-App purchases, what is being bought, when, and on what type of device.
These are just a few of the things you can do with in-App analytics. When adding analytics to your App it is very important that you don't just tack them on. You really need to sit down and work out what you want to track, how you want to track it, and what you need to measure to judge the success of your App.
So, on to the choice of platforms for mobile analytics. Two come to mind: Google Analytics and Flurry. Google Analytics has come a long way and I would recommend it; it also seems to represent the data better visually. Anyway, check the two out and make what you think is the best choice.
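To give a feel for the integration effort, here is a sketch using the Google Analytics iOS SDK (v3-era API). The tracking ID, category, and labels are placeholders for illustration:

```objc
#import "GAI.h"
#import "GAIDictionaryBuilder.h"

// Obtain a tracker for your property. "UA-XXXXXX-Y" is a placeholder for
// the tracking ID from your Google Analytics account.
id<GAITracker> tracker =
    [[GAI sharedInstance] trackerWithTrackingId:@"UA-XXXXXX-Y"];

// Record a custom event -- e.g. an In-App purchase. Deciding on a small,
// consistent set of category/action/label names up front is exactly the
// "work out what you want to track" step above.
[tracker send:[[GAIDictionaryBuilder createEventWithCategory:@"in_app_purchase"
                                                      action:@"buy"
                                                       label:@"coin_pack"
                                                       value:@1] build]];
```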
Some Links to What was mentioned
- http://try.crashlytics.com/ (Free to use; keeps track of when things go wrong and gives great detail around it)
- Google Mobile Analytics (Free to use, can even track In-App purchases, and can also do some crash analytics)
What is happening now?
2013 has been the year of the wearable device. The market is really heating up and a lot of different products are coming out. Wearable devices are no longer just for the IT geeks out there. Businesses are now investing some of their R&D into turning their product into a stylish fashion accessory. This has made wearable devices fashionable rather than just an IT must-have.
You might ask yourself, where does this momentum come from? A lot of this wearable-device goodness has come from crowdfunding websites like Kickstarter.
These sites have allowed what were just concepts and prototypes to get the kind of funding they needed to become commercially viable, sellable products, as well as giving backers feedback on the progress of what they have backed. A good example of this is the second most highly crowd-funded project of all time as of this writing, the "Pebble", which raised $10 million on Kickstarter. The Pebble is a smart watch which uses Bluetooth to connect to your Android phone or iPhone to:
- Control music
- Upload custom watch faces
- Integrate with native Apps
- Provide Caller ID and Message notifications
A picture of Pebble controlling music
The next question is where is all this tech going?
A lot of the wearable devices coming to market at the moment are aimed mainly at the health and fitness industry. They allow people to wear devices on their body which can track:
- How they exercise
- Distance travelled
- Calories burned
- How long and how well they sleep
A good example of this is Fitbit, which recently received $43 million in venture capital funding to further grow their catalogue of digital fitness trackers and health devices.
A picture of Fitbit device
But we are starting to see a lot of devices coming out which are focused more on vision (Google Glass, Meta glasses) and are using concepts like AR (augmented reality) to overlay relevant information for the consumer. Examples of this are:
- Overlaid navigation
- Virtual tourism
- POV (Point of View) videography and photography
- Information Overlay
Finally what can you as a developer do with these devices?
A lot of these devices don't necessarily have a software ecosystem accompanying them. Instead, the businesses behind them are providing SDKs for developers (in most cases free) to create their own applications, or integrate into existing ones, which can easily tap into the features of the devices they are building. This means they can focus on the core product and allow the developer community out there to create the software ecosystem for them, building some interesting software concepts utilising the technology.
You may ask what languages and tools you will need to learn to develop for these devices. Just a few are:
- Unity 3D
As you can see, because in some cases you are potentially getting down to such a low level, you will need to brush up on your C and C++ coding abilities. The usual conveniences that other languages give you may not be available on such a small and potentially not very powerful device.
Well, that is a wrap for the article. If you have any more questions about wearable devices, or about how any devices on the market now could be integrated into your business, don't hesitate to get in contact with me.
- DerivedData Eliminator (Allows you to add a button to the IDE to remove derived data as opposed to the multiple key strokes required)
- VVDocumenter (Creates a Javadoc-style comment template for methods etc. Useful if you use doxygen)
- Uncrustify Plugin (Allows you to apply uncrustify to current active file or entire project)
- JDPluginManager (Allows you to easily update your plugins. Alcatraz doesn’t support this yet)
- OCMock (An Objective-C mocking framework for iOS) http://ocmock.org/
- TestFlightApp (beta app testing) https://www.testflightapp.com/
While moving from TextMate to Sublime Text as my text editor of choice, I noticed that I now had multiple "Open With" options appearing in my right-click menu.
I went on the internet and found a snippet that can fix the problem from a terminal window:
/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -kill -r -domain local -domain system -domain user
All you have to do is copy and paste this into Terminal and run it. (Note: it takes a little time to run, so be patient.)
Relaunch Finder (control+option+click on Finder icon in the Dock)…
Ta-da! The duplicate or old items are now gone!
The following are the steps I used to fix support for Microsoft Communicator in my install of Adium on my Mac. Enjoy!
- Delete SIPE plugin if you already have it installed
- Quit Adium
- Download and Unzip SIPE Plugin Located here: http://users.rcn.com/zer0/SIPEAdiumPlugin.AdiumLibpurplePlugin(64bit).zip
- Double Click Plugin which will automatically install Plugin and start up Adium
- Quit Adium again
- Create the following symlinks within the respective framework Versions directories under /Applications/Adium.app/Contents/Frameworks:
ln -s Current /Applications/Adium.app/Contents/Frameworks/libpurple.framework/Versions/0.10.0
ln -s Current /Applications/Adium.app/Contents/Frameworks/libglib.framework/Versions/2.0.0
ln -s Current /Applications/Adium.app/Contents/Frameworks/libintl.framework/Versions/8
- Add your Office Communicator Account settings back and you should be able to connect again.
Recently I have been involved in a project where we needed to build a mobile application within 2 weeks. The project needed to access an API which didn't exist yet, and it all needed to come together at the end with minimal testing and minimal integration refactoring.
You are probably thinking this is impossible. Well, it can be done, but not easily. All you need to do is be persistent:
- First create an initial API contract and data model
- Create code from the initial API contract and mock the callbacks.
- Generate mock data as close as possible to what the API will provide.
- Avoid changes to the API as much as possible. (Adding fields is easier than a complete redesign.)
- Keep probing the API developer about when they will finish so you can start testing the integration.
This pattern, although not foolproof, makes it feasible to develop an API and a UI at the same time.
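The steps above can be sketched as a client class whose implementation is switched at compile time. Everything here (the `APIClient` name, the `USE_MOCK_API` macro, the URL, the response shape) is a hypothetical illustration, not code from the project:

```objc
#import <Foundation/Foundation.h>

typedef void (^UserCompletion)(NSDictionary *user, NSError *error);

// Hypothetical client built against the agreed API contract.
@interface APIClient : NSObject
- (void)fetchUserWithId:(NSString *)userId completion:(UserCompletion)completion;
@end

@implementation APIClient
- (void)fetchUserWithId:(NSString *)userId completion:(UserCompletion)completion {
#ifdef USE_MOCK_API
    // Canned response shaped exactly like the contract, so the UI can be
    // built and demoed before the real endpoint exists.
    completion(@{ @"id": userId, @"name": @"Test User" }, nil);
#else
    // Real network call, swapped in once the API is delivered. Because the
    // callback signature never changes, the UI code is untouched.
    NSURL *url = [NSURL URLWithString:
        [NSString stringWithFormat:@"https://api.example.com/users/%@", userId]];
    [[[NSURLSession sharedSession] dataTaskWithURL:url
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (error) { completion(nil, error); return; }
            completion([NSJSONSerialization JSONObjectWithData:data
                                                       options:0
                                                         error:NULL], nil);
        }] resume];
#endif
}
@end
```

Defining `USE_MOCK_API` in a dedicated build configuration means switching between mock and real backends is a scheme change, not a code change.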
I am currently working on an iOS project for a client which would be considered quite large for a mobile project. The project is made up of two streams: iPad and iPhone.
We have 6 developers in total spread across the two applications, which effectively do the same thing but have different user experiences to differentiate them.
There is a team of testers, 5 in total, plus the project manager. The testers effectively sit there running manual test cases on the application, on all the different devices and OS versions we are supporting.
This seems to happen on every single iteration of the project. Some observations I have made: too many developers can spoil the broth, as they say. Sometimes people don't ask questions and reinvent the wheel, making the codebase more complicated. Sometimes developers don't pay enough attention to detail, and defects go backward and forward as more edge cases come up. The more capable developers can become overloaded with fixing the complicated issues.
The next thing is how do you remove the physical aspect of testing or at least reduce the cost of it on a mobile project.
There are different frameworks like KIF and Cucumber which can reduce this, but they don't necessarily stamp out the need for physical testers. Also, if you haven't started with them from the beginning, it can be hard to add them later.
I have been fighting with these questions in my head for a while. I really want to create a tangible plan for the client which will not only save them money but also improve their process going forward and reduce all this time wasted on defects and testing. My first thought is reducing the scope, and with it moving to smaller, quicker release cycles.
Anyway, does anyone out there have any thoughts or ideas? Add a comment if you do.
I was recently in a situation where a client decided they wanted separate iPhone and iPad applications. Normally you would ask: what is hard about that?
Well, they both effectively do the same thing. They also wanted separate ratings for each one in the App Store. So what we did was split out the UI and create a shared kit for calling the backend and parsing its responses. This all sounds good and maximises reuse between the two very different UIs.
Now comes the problem. We used pre-compiler directives to set the URLs of whichever environment we are calling and compile the right ones into the shared kit. Now you may ask: why are we doing this?
Well, the main reason is security. We don't want to make it easy for people with jailbroken phones to change the URLs the application calls, which can open the door to things like man-in-the-middle attacks.
So if you want to compile your code against UAT, as an example, you would need your root project to have a scheme called UAT and your sub-project to have a scheme called UAT as well. That way, when you build the UAT scheme in the root project, it will chain to sub-project schemes with the same name, or fall back to the default scheme. This can also be useful for turning on logging across all your projects.
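A sketch of the pre-compiler approach, assuming each scheme's build configuration defines a matching preprocessor macro (UAT, PRODUCTION) under Preprocessor Macros; the macro names and URLs here are illustrative:

```objc
// Environment.h -- baked in at compile time, so a jailbroken device cannot
// simply edit a plist or config file to redirect the app at another server.
#if defined(UAT)
    #define API_BASE_URL @"https://uat.example.com/api"
#elif defined(PRODUCTION)
    #define API_BASE_URL @"https://www.example.com/api"
#else
    // Default (developer) environment when no macro is defined.
    #define API_BASE_URL @"http://localhost:8080/api"
#endif
```

The shared kit then references only `API_BASE_URL`, and the scheme chosen in the root project decides which URL every sub-project compiles in.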
- Root project