Trailhead.

Trailhead is far and away my favorite learning tool for Salesforce development. The combination of guided tutorials with hands-on, in-org development is unbeatable. On top of this, the guided tutorials provide direct, built-in feedback. Until recently, the state of the art for teaching programming languages was books, and the best we had for checking our work was compiler and interpreter errors. I’ve waxed philosophical about Trailhead before, but it’s unparalleled as a teaching tool. Recently a Lightning Dev Week participant asked me what my favorite Trailhead module was. After some consideration, I think my favorite module is…

Apex Testing.

I spend a lot of time on the developer forums and Salesforce Stack Exchange. I think it’s safe to say that the majority of questions are about testing. There’s the classic “will you write my tests for me?” There’s the philosophical “why must I write unit tests?” But my favorite is still “how can I increase my test coverage for this code?” Trailhead’s Apex Testing module, while it doesn’t cover everything, is a great start.

The module’s components.

The Apex Testing module has three components that build on each other. It starts with an overview of unit testing basics like assertions, and tops off with a practical challenge: write a unit test for a given class. To pass the challenge, you have to reach 100% test coverage. That requirement reinforces several core ideas; most importantly, it enforces the practice of testing all the logical paths in your code.
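
For readers who haven’t seen an Apex unit test before, a minimal sketch of such a test looks something like this (the class and method under test are hypothetical):

```apex
@isTest
private class TemperatureConverterTests {
    @isTest
    static void testFahrenheitToCelsius() {
        // Positive case: known input, assert on the expected output.
        Decimal result = TemperatureConverter.toCelsius(212);
        System.assertEquals(100, result, 'Boiling point should convert to 100C');
    }
}
```

Writing one of these for every logical branch in the class is how you climb to that 100% mark.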

Unit Testing Apex Triggers.

Because triggers execute in response to DML events, unit tests for triggers have to contain both DML statements and assertions. And while orgs only have to maintain 75% aggregate code coverage, every trigger has to have some coverage. This has the practical effect of generating many more testing questions related to triggers. Learning to test triggers in Trailhead serves not just to train developers but to advance the community, by decreasing the number of routine “how do I write a test for this trigger?” questions on the forums.
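
A sketch of that DML-plus-assertion pattern (the Rating-defaulting trigger behavior here is a hypothetical example):

```apex
@isTest
private class AccountTriggerTests {
    @isTest
    static void testInsertFiresTrigger() {
        Account a = new Account(Name = 'Test Account');
        Test.startTest();
        insert a; // the DML statement is what actually fires the trigger
        Test.stopTest();
        // Re-query to observe the trigger's effect, then assert on it.
        a = [SELECT Rating FROM Account WHERE Id = :a.Id];
        System.assertEquals('Warm', a.Rating, 'Trigger should default the Rating');
    }
}
```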

Creating Test Data.

Of all the components in the Trailhead Apex Testing module, I find the last one most valuable. In it, readers learn why it’s important to create your own test data. This is more than just a practical matter, and it’s the key to why this component is a hidden gem. Testing your code is arguably more important than actually writing it. While most of us wouldn’t neglect objects or other code dependencies, data dependencies are often overlooked. Learning to write proper tests means learning to write code that fulfills all of its dependencies. By fulfilling all the dependencies and writing proper tests, developers gain the confidence that their test is valid not only today, but next week and next release!
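
In practice this usually means a small factory that builds everything the code under test depends on; a sketch (the object choices are illustrative):

```apex
@isTest
public class TestFactory {
    // Build the full chain of data dependencies inside the test context,
    // so the test never relies on records that happen to exist in the org.
    public static Contact createContactWithAccount() {
        Account a = new Account(Name = 'Test Account');
        insert a;
        Contact c = new Contact(LastName = 'Tester', AccountId = a.Id);
        insert c;
        return c;
    }
}
```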

A good three base hit.

Trailhead’s Apex Testing covers three of the most important aspects of unit testing. But while Trailhead takes the time to teach the basics, I believe it misses two key facets. First, and most importantly, it doesn’t address the importance of testing different users and scenarios. Specifically, I want Trailhead to teach developers to write tests that:

  1. Test the “expected” behavior — so-called “positive” test cases. These tests pass in expected input and test for expected output. One positive test case for each logical branch.
  2. Test expected Exceptions — so-called “negative” test cases. These tests pass invalid or otherwise faulty data into the unit of code. Negative tests assert that the code threw an exception. Bonus points for tests that assert a specific type of exception and its message.
  3. Test the code with various user roles and permissions. Can the code handle execution with a non-sysadmin?
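
Sketches of the second and third test types (Calculator is a hypothetical class whose divide() performs integer division and so throws on a zero divisor):

```apex
@isTest
static void testDivideThrowsOnZero() {
    // Negative case: assert that faulty input raises the expected exception.
    try {
        Calculator.divide(42, 0);
        System.assert(false, 'Expected an exception for division by zero');
    } catch (System.MathException e) {
        // Bonus points: assert on the exception type and message.
        System.assert(e.getMessage().toLowerCase().contains('zero'));
    }
}

@isTest
static void testAsStandardUser() {
    // Run the same unit of code as a non-sysadmin.
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
    User u = new User(Alias = 'stduser', Email = 'std@example.com',
        EmailEncodingKey = 'UTF-8', LastName = 'Standard',
        LanguageLocaleKey = 'en_US', LocaleSidKey = 'en_US', ProfileId = p.Id,
        TimeZoneSidKey = 'America/New_York',
        UserName = 'stduser' + DateTime.now().getTime() + '@example.com');
    System.runAs(u) {
        System.assertEquals(21, Calculator.divide(42, 2));
    }
}
```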

Each of these test types safeguards against common exception cases. By testing more of these common cases, we gain more confidence in the robustness of our code. Perhaps Trailhead will expand to teach these three test types. (Dear Trailhead team, if you’d like help with that, tweet me!)

Secondly, I wish Trailhead also discussed HTTP callout tests. More and more of our work as Salesforce developers involves integrations, and these often take the form of API integrations through HTTP callouts. Testing callouts requires knowledge of the HttpCalloutMock interface and of the Test.startTest() and Test.stopTest() methods.
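
The pattern looks roughly like this (IntegrationService is a hypothetical class that makes the actual callout):

```apex
@isTest
private class CalloutTests implements HttpCalloutMock {
    // The mock returns a canned response instead of making a real callout.
    public HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status":"ok"}');
        res.setStatusCode(200);
        return res;
    }

    @isTest
    static void testCallout() {
        Test.setMock(HttpCalloutMock.class, new CalloutTests());
        Test.startTest();
        String status = IntegrationService.fetchStatus(); // performs the callout
        Test.stopTest();
        System.assertEquals('ok', status);
    }
}
```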

Regardless of these two sticky bits, I think this module is the best one out there. Testing is one of those key skills that every Salesforce developer has to master. Trailhead provides not only knowledge transfer but skill practice and evaluation — a combination that can’t be beat — especially for testing.

Salesforce Communities sometimes have to be Debugged.

Recently I had the opportunity to do some communities work. On the whole, I love the Communities product. From an end-user perspective it can’t be beat. As a developer, though, I’ve hit a few snags debugging Visualforce pages in communities. During normal Visualforce development, you can rely on the platform to surface errors, usually in the form of a nice ugly error message. Now, ugly error messages are the epitome of terrible user experiences, so it’s understandable that Salesforce would prevent them from appearing to users.


Behold, an ugly error message in its uninformative native habitat!

Unfortunately, Salesforce replaces Visualforce errors with a different, far less informative error message. This message displays only for Visualforce errors – not Apex errors. In fact, two conditions must be met to trigger it. First, the page’s controller or controller extension(s) must instantiate without error. Second, there must be some kind of rendering error. What do I mean by “rendering error”? Well, in my case it was a view state size error. When these types of errors happen, Salesforce hides the error behind this lovely message.

When I first hit this bug, I reached for the usual debugging techniques. I tried a few different users. I tried adding users to debug logs. I made sure that logs were appearing in the dev console and read every line. I saw my controller firing up and completing without a single error, yet the error page plagued me. After some discussion with the IRC #salesforce community, I discovered a method on the PageReference class called getContent(). getContent() returns the rendered content of the page, including error messages. Most importantly, it returns errors rendered at the Visualforce level, allowing us to capture the error before Salesforce neatly hides it. It’s possible to construct a page that attempts to render the content of any other Visualforce page in a try/catch block; when the page catches an error, it displays it.

Debugging Communities with a Visualforce wrapper

To help others debug communities-based Visualforce errors, I’ve developed a reusable Visualforce page called CommunityDebugger, and its corresponding controller CommunityDebuggerCtrl. Here’s the controller code:
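
A minimal sketch of the approach, assuming the failing page’s name arrives via a `page` query-string parameter and the remaining parameters are passed through:

```apex
public class CommunityDebuggerCtrl {
    public String errorText { get; private set; }

    public CommunityDebuggerCtrl() {
        Map<String, String> params = ApexPages.currentPage().getParameters();
        PageReference target = new PageReference('/' + params.get('page'));
        // Forward every other query-string parameter to the failing page.
        for (String key : params.keySet()) {
            if (key != 'page') {
                target.getParameters().put(key, params.get(key));
            }
        }
        try {
            // getContent() forces a full render of the target page,
            // surfacing errors the community would otherwise hide.
            errorText = target.getContent().toString();
        } catch (Exception e) {
            errorText = e.getTypeName() + ' : ' + e.getMessage();
        }
    }
}
```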

And the Visualforce page:
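
A sketch, assuming the controller exposes the captured output in an `errorText` property:

```xml
<apex:page controller="CommunityDebuggerCtrl">
    <h1>Community Debugger</h1>
    <apex:outputText value="{!errorText}"/>
</apex:page>
```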

To use this page to debug a community page:

  1. Deploy the controller and page to your sandbox or developer org. You shouldn’t deploy this to production.

  2. Visit your silently failing Community URL, say cs4.force.com/awesomeCommunity/failingPage?param1=foo&param2=theDoctor

  3. Prepend “CommunityDebugger?page=” before your failing page, like this: cs4.force.com/awesomeCommunity/CommunityDebugger?page=failingPage&param1=foo&param2=theDoctor

  4. When the page loads, your error will be rendered for you to see, like this:

System.VisualforceException : Id value is not valid for the User standard controller : Class.CommunityDebuggerCtrl.fetchFailingPage: line 23, column 1

And that, my fellow developers, is an error message you can act on. For the record, the bug that started this all? My view state was too big. Transient FTW.

I’m a PC. I’m a Mac. I’m Linux

Back in the day there were really only two or three holy wars that coders inevitably got dragged into. First, there’s the war of the editors: Vi(m) V. Emacs. (Sublime FTW.) Then there’s Windows V. Mac V. Linux V. *BSD. Finally, it seems that everyone has their own post on why their favorite programming language is better than X.

After my last post, I received a number of responses on Twitter and here via comments about naming things like classes, objects and methods. After some conversation with other developers, I’ve discovered a new (cold) holy war among developers — naming conventions.

A Rose Method by any other name…

There are a few well-known naming conventions for variables – Hungarian notation comes to mind. Several frameworks and languages have specific conventions as well. Rails (well, more specifically Active Record) has class and model conventions that describe how you name objects and their controllers. In both cases, the conventions are there to help developers understand the meaning and purpose behind what they’re referring to. Hungarian notation prefixes a logical type identifier to the variable name. Rails’ conventions illustrate whether you’re dealing with an object, a database table or a model. In Salesforce, however, it seems we’re much more loosey-goosey with our naming schemes. I polled a number of developers I respect about how they name classes and methods, and by far the most thought-out answer I received was:

Carefully. ~Akrikos

To be honest, at first I thought his answer was a bit flippant, but he followed up by saying: “Every time I’ve named something poorly I’ve regretted it. Especially in the Salesforce world where there’s such a high cost to changing the name of something.” He’s right, of course, that there’s a high price to pay for renaming a class in Salesforce. The platform demands integrity and renaming something means touching everywhere it’s used. So then, what’s in a name?

What’s in a name?


Naming a method or a class is all about assigning meaning. Conveying meaning in a limited number of characters is all about conventions. Conventions provide consistency that enables at-a-glance understanding. It stands to reason, then, that regardless of what conventions you adopt, you should strive to enforce them 100% consistently. Here are some conventions, in no particular order, that I think you should adopt:

  1. Filenames should have standardized endings. This quickly shows the purpose of the file and its class:
    1. Test classes end in Tests.
    2. Custom Visualforce Controllers end in Ctrl
    3. Custom Visualforce Controller Extensions end in CtrlExt
  2. Bias towards longer descriptive names and not shorter ones: ThisMethodDescribesWhatItDoes() v. magic()
  3. Put reusable utility code in a utils or lib class, e.g. “CommonContactUtilities”
  4. Group related code together with an “internal namespace”. For instance, if you’ve developed a set of Visualforce pages and Extensions for feature XYZ internally namespace them like:
    1. XYZ_CustomAccountDemoCtrlExt
    2. XYZ_VisualForcePageOfAwesome
    3. XYZ_CustomAccountDemoCtrlExt_Tests
  5. Async classes should include in their name the class they’re scheduling or batching
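
Applied together, the files for a hypothetical feature (all names invented for illustration) might look like:

```text
XYZ_InvoicePdfCtrl.cls          // custom Visualforce controller for feature XYZ
XYZ_InvoicePdfCtrlExt.cls       // controller extension for the same feature
XYZ_InvoicePdfCtrl_Tests.cls    // its unit tests
XYZ_InvoiceRollup_Batch.cls     // async class batching InvoiceRollup work
```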

Homage to Sandi Metz

Sandi Metz, Ruby Hero and author of Practical Object-Oriented Design in Ruby

A year or so ago, Ruby Hero and author of Practical Object-Oriented Design in Ruby Sandi Metz gave us her four rules of Ruby Development. As stated, her four rules are:

  1. Classes can be no longer than one hundred lines of code.
  2. Methods can be no longer than five lines of code.
  3. Pass no more than four parameters into a method. Hash options are parameters.
  4. Controllers can instantiate only one object. Therefore, views can only know about one instance variable and views should only send messages to that object (@object.collaborator.value is not allowed).

While her rules are rather specific to Ruby development, I believe the ideas behind them are fairly universal and useful. With that in mind, here is a set of rules for Apex development, along with the rationale behind each.

My highly opinionated rules for Apex Simplicity

  1. Classes can be no longer than 300 lines of code. — I took three classes from production Apex projects and ported the code to idiomatic Ruby. Each of my Apex classes used approximately three times the number of lines the Ruby code took, hence 300 lines. Putting a cap on the number of lines has a number of benefits, but perhaps the biggest is that it enforces a clarity of purpose on the class. Recently, I had the opportunity to work with a 2k-line class, “Coupon”. It handled five types of discounts. Breaking those five discounts into five classes helps convey which class has responsibility for what.
  2. 20 lines per method — This is easily the most contentious rule. The primary benefit of following it is that your code will necessarily be simpler. Nested if statements and long blocks of code for edge cases are tucked away in other, more tightly focused — and more easily tested — methods. Like Thoughtbot’s implementation of Metz’s rules, I think this needs some explanation, as not everything makes sense as a ‘line’. Conditional logic keywords like if/else each count as a line. The line constraint will also likely reduce your use of else if() constructs, keeping your code simpler as well! So why 20 lines? Ruby has implicit returns, whereas Apex requires explicit return statements, adding at least one extra line per method. Couple that with a generally more verbose language and I felt like 20 was the sweet spot. It’s a fairly arbitrary choice, I admit, but in some testing I’ve found that I can usually get my methods into 20 lines (not counting comments; you are commenting, right?)
  3. Only four method arguments per method — First, let me state an exception to this rule: factory methods for producing test data are exempt. For instance, a factory method returning an Order may require passing in a User, an Account, a Contact, an Opportunity as well as… well, you get the idea. Unlike test methods, most ‘live’ methods accept parameters for one of two reasons: direct manipulation of data or conditional logic. Again, simplicity is the real reason and winner here. Limiting each method to no more than four incoming parameters forces you to have different methods for different conditional branches, keeping your methods short, concise and tightly focused.
  4. Visualforce Controllers can only talk to a single instance variable. — I admit, I have a hard time following this one. The idea is to expose the entirety of your data through a single wrapper object. This forces architectural changes down the line: your controller or controller extension becomes focused not on gathering data, but on preparing it for the view. This has a number of benefits, but code simplicity and testability are the best of them. Note, this doesn’t include controller methods, just the data being exposed on the page.
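
A sketch of rule 4’s single-wrapper idea (the object and field choices are illustrative):

```apex
public with sharing class AccountDashboardCtrlExt {
    // Everything the page needs, exposed through one wrapper object.
    public class ViewModel {
        public Account acct { get; set; }
        public List<Opportunity> openOpps { get; set; }
        public Decimal pipelineTotal { get; set; }
    }

    public ViewModel data { get; private set; }

    public AccountDashboardCtrlExt(ApexPages.StandardController std) {
        data = new ViewModel();
        data.acct = (Account) std.getRecord();
        data.openOpps = [SELECT Id, Name, Amount FROM Opportunity
                         WHERE AccountId = :data.acct.Id AND IsClosed = false];
        data.pipelineTotal = 0;
        // Prepare the view's rollup here, not in the page.
        for (Opportunity o : data.openOpps) {
            data.pipelineTotal += (o.Amount == null ? 0 : o.Amount);
        }
    }
}
```

The page then binds only to `{!data…}`, so tests and the view share one well-defined surface.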

If you’re sensing an underlying theme to these rules, it’s simplicity – keeping the code as simple as possible. Of course, sooner or later we’ll hit a class with 350 lines and no good refactoring path. And that’s OK too. Break the rules when you have to, not just because you can.

I’ve been using the Salesforce Mobile SDK for a few years now, and every version has been a big improvement. However, the latest release — version 3.1 — has a couple of additions noteworthy enough that I want to address them specifically. While there are some technical features included — it’s now a CocoaPod! — there are two non-technical features I want to highlight.

Unified Application Architecture in the Mobile SDK

In their own words, Salesforce has:

unified the app architecture so that apps built with the latest SDK – hybrid or native – now have access to the same core set of functionality regardless of their target platform. All the libraries, all the APIs, and all the major mobile building blocks have consistent implementations, while at the same time, provide full access to the host operating system.

To break that down, what they’re saying in their understated way is that you can now write apps for Android, and iOS using your choice of:

  • Objective-C or Swift (with a bit more work) on iOS.
  • Java or C (with a bit more work) on Android.
  • HTML5, CSS and JavaScript for both iOS and Android.

Theoretically, if you’re willing to put in some extra work, you could place the JS within a Windows 8.x phone container, enabling hybrid Salesforce connected apps on Windows 8.x. What’s most impressive about this platform feature parity is that all the building blocks of a connected Salesforce mobile application have consistent implementations. This allows an unprecedented level of flexibility: developers can now make platform choices based on business needs (I need to support iOS and Windows Phone 8.1) rather than on what features are available (I need SmartStore, so I have to use the JS/hybrid platform).

Docs and Examples from the Mobile SDK

The second feature of the new SDK version that I want to highlight is near and dear to my heart. For me, there are two equally important forces that drive my understanding and adoption of a new API, SDK or technology: documentation and example code.

Docs are a coder’s best friend, providing all the nitty-gritty details of how every individual function works. The Mobile SDK has a brilliant set of documentation (iOS docs are here). Without these, using the SDK would be a black-box nightmare.

On the other hand (or fist), example code is a noobie’s best friend. When you’re starting out with a new API or SDK, you need to see what is happening. Indeed, you need to see what to do more than how to do it. ‘Do I log in first thing? Or show a settings dialog?’ Example code leads to the eureka of ‘Oh, I see — I present the login view first.’ Moreover, example code exposes new users to what methods and functions are available! Included in version 3.1 of the SDK we find iOS and Android example applications. More importantly, we also find a hybrid (HTML, CSS, JS) example. Additionally, the SDK has related examples using Polymer. Perhaps most interestingly, the developer evangelism team has example code using Angular / Ionic!

While each release of the SDK has brought new and exciting features, version 3.1 brings not only features but a level platform playing field, plus the examples to bring new developers up to speed quickly. Now if only they’d support RubyMotion.

If you’ve not yet heard about Trailhead, click here. I’ll wait. Back? Great. Trailhead is awesome, and I’ve been a big fan since it was released at Dreamforce ’14. It’s got modules for both the clicking and the coding developers among us. Regardless of your development style, the mix of videos, how-tos and exercises helps you learn and sharpen your skills. About two weeks ago, I finished all the available modules and tweeted that I was done. At the time I was promised more modules, and this morning I woke up to two wholly new modules and two modules with new exercises!


Trailhead Data Security

Data Security is the first new module. It teaches users the finer points of Field Level Security (FLS), sharing rules, the role hierarchy, and related information. Perhaps the most important section is the module’s overview. I don’t want to ruin the fun, but I will say you might want to print this off and post it on your wall: a handy infographic on data security! It should be pointed out that data security is possibly the single most important and complex topic covered by Trailhead, and it’s critical we all become experts on it. Also, for the record, forcing users to use a hardware token is only an organizational security feature in my head.


Trailhead Change Management

Trailhead Change Management is the second new module released today. It discusses the finer points of using sandboxes, change sets and the overall workflow recommended for Salesforce feature development. While Trailhead is aimed at developers, I wish this module were somehow required homework for the managers and budget creators who argue against providing developers sandboxes! Build, test, deploy! Build in a sandbox, deploy to production!

In addition to the two new modules, there are new challenges associated with the Testing and Apex modules. Interestingly, the Trailhead site now shows a preview of things to come, with modules for:

  1. Asynchronous Apex
  2. Apex Integration Services
  3. Visualforce & JavaScript
  4. App Deployment

I, for one, look forward to finishing up the Data Security and Change Management modules while they prepare the new Apex modules! You can get started with Trailhead by clicking here!

A while back I stumbled across a situation where I needed to do a Visualforce mail merge, but I didn’t want to send the email. Unfortunately, there’s no built-in way to do that. Salesforce’s Visualforce merge code doesn’t give you a “getter” for the merge result. Instead, the normal workflow looks like this:

    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    String[] toAddresses = new String[]{'theDoctor@who.com'};
    mail.setToAddresses(toAddresses);
    mail.setUseSignature(this.useSig);
    mail.setSaveAsActivity(this.saveActivity);
    mail.setSenderDisplayName(this.senderDisplayName);
    mail.setTargetObjectId(targetObjectId);
    mail.setTemplateId(templateId);
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] {mail});

In fact, there’s not even a .merge() method exposed in Apex. The merging happens as part of Messaging.sendEmail();

However, after some research I discovered that PJC over on Stack Exchange had figured out that a DB savepoint could be (ab)used to grab the template contents after merging. This is Neat(c).

Fast forward a few months: [LifeWithRyan](http://www.sudovi.com/) and I were talking on IRC about this same problem. We agreed to both blog our solutions; his is here: [When an Email Template just isn’t enough](http://www.sudovi.com/when-an-email-template-just-isnt-enough/). I decided to wrap the method I found up in a reusable class: MailUtils.cls. MailUtils offers a single static method, getMergedTemplateForObjectWithoutSending(Id targetObjectId, Id templateId, Boolean useSig, Boolean saveActivity, String senderDisplayName), that takes the work out of this. It returns a Map with the following keys:

  • textBody: the merged text body
  • htmlBody: the merged HTML version
  • subject: the subject line of the email

Here’s MailUtils.cls in its full ‘glory’:

public class mailUtils {
  public class mailUtilsException extends exception {}

  public Boolean useSig {get; private set;}
  public Boolean saveActivity {get; private set;}
  public String senderDisplayName {get; private set;}

  public mailUtils(Boolean useSig, Boolean saveActivity, String senderDisplayName){
    this.useSig = useSig;
    this.saveActivity = saveActivity;
    this.senderDisplayName = senderDisplayName;
  }

  // Derived from: 
  // http://salesforce.stackexchange.com/questions/13/using-apex-to-assemble-html-letterhead-emails/8745#8745
  public Messaging.SingleEmailMessage MergeTemplateWithoutSending(Id targetObjectId, Id templateId) {
    Messaging.reserveSingleEmailCapacity(1);
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    // Intentionally set a bogus email address.
    String[] toAddresses = new String[]{'invalid@emailaddr.es'};
    mail.setToAddresses(toAddresses);
    mail.setUseSignature(this.useSig);
    mail.setSaveAsActivity(this.saveActivity);
    mail.setSenderDisplayName(this.senderDisplayName);
    mail.setTargetObjectId(targetObjectId);
    mail.setTemplateId(templateId);

    // create a save point
    Savepoint sp = Database.setSavepoint();
    // Force the merge of the template.
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] {mail});
    // Force a rollback, and cancel mail send.
    Database.rollback(sp);

    // Return the mail object
    // You can access the merged template, subject, etc. via:
    // String mailTextBody = mail.getPlainTextBody();
    // String mailHtmlBody = mail.getHTMLBody();
    // String mailSubject = mail.getSubject();
    return mail;

  }

  public static Map<String,String> getMergedTemplateForObjectWithoutSending(Id targetObjectId, Id templateId, Boolean useSig, Boolean saveActivity, String senderDisplayName) {
    Map<String,String> returnValue = new Map<String,String>();
    mailUtils mu = new mailUtils(useSig, saveActivity, senderDisplayName);
    Messaging.SingleEmailMessage mail = mu.MergeTemplateWithoutSending(targetObjectId, templateId);
    returnValue.put('textBody', mail.getPlainTextBody());
    returnValue.put('htmlBody', mail.getHTMLBody());
    returnValue.put('subject', mail.getSubject());
    return returnValue;
  }

}
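
Calling it then looks like this (the Id variables are placeholders for a real target record and EmailTemplate in your org):

```apex
Map<String, String> merged = mailUtils.getMergedTemplateForObjectWithoutSending(
    targetContactId,   // e.g. a Contact or User Id the template merges against
    emailTemplateId,   // the EmailTemplate to merge
    false,             // useSig
    false,             // saveActivity
    'Example Sender'   // senderDisplayName
);
System.debug(merged.get('subject'));
System.debug(merged.get('htmlBody'));
```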

I never thought I’d be the kind of person to “do” a 5k. In fact, had you asked me a month ago, I’d have told you that Americans don’t understand the Metric system, and that 5k runners were clearly invading Canadians in disguise. And yet, two weekends ago I found myself standing at the starting line of the Santa Paws 5k surrounded by 300 invading Canadians and their four-legged best friends. Santa Paws is the annual fundraiser for the Wake SPCA, here in North Carolina. Of course, I couldn’t just walg (walk-jog) the 5k; I had to turn it into an excuse to play with new technology!

The Quantified Self.

For a while now I’ve been fascinated by the idea of the Quantified Self. In short, the idea is to log one’s activities, locations and events, in the hope that more data can lead to better decisions. Some of this has been going on for decades: keeping one’s day-planner or diary updated is likely the analog beginning of the Quantified Self. The innovation of our generation is found in passive collection. Put a Fitbit, Up (or soon, Apple Watch) on your wrist and your daily sleep and activity is automatically recorded. In the age of smartphones with motion co-processors and clever apps, we can even move beyond automatically gathering data to proactively prompting actions. Every day, ’round about 5pm, RunKeeper lovingly (annoyingly?) tells me, “Remember, you used to think this was a fine time to go for a walk? Want to go for one now?” Our access to technology has raised the bar from passive collection of data to actively prompting and encouraging better decisions. (And yes, even I have to admit that more activity is a better decision for me.)

The Problem

Kevin and Lilo rest after the finish of the SantaPaws 5k

Lilo is my adorable 45lb Carolina dog. She’s ferociously smart, loves Babyfriar and is the secret to cheap, boundless energy — if we could only figure out how to harness it. I’ve worried that she’s not as active as she needs to be, because her brothers are decidedly lower-energy critters. She needs more exercise. Right about the time I signed up to do Santa Paws with Lilo, I found out about Whistle, a wifi-enabled ‘Fitbit’ for dogs. Whistle uses a Bluetooth connection to your phone to identify who the pup is with during activities, and it can differentiate between a walk, playtime and sleeping-in-the-most-uncomfortable-looking-position-on-the-couch. A quick trip to the pet store and Lilo was Whistle’d. Not only are we quantifying ourselves, but our best friends as well. Indeed, with the RunKeeper app providing information like distance walked and current and average pace while recording GPS locations, it’s possible not only to quantify any given walk, but also to use that data to, for instance, slowly increase pace and distance for training purposes. The problem, however, is that the RunKeeper data is siloed off in the RunKeeper app, while the Whistle data is hidden away inside the Whistle app.

Enter Salesforce

I believe Salesforce provides an ideal platform for self-quantification. Its APIs provide a rich environment for integrating and aggregating data from a myriad of sources. And so a project was born: a real-time updating map of Lilo’s and my progress on the 5k course, with GPS data from RunKeeper, step information from Fitbit and activity information from Whistle.

Building it out

As with many a side project, the design was simple on paper, but proved rather challenging to implement. I wanted an app that would:

  • Collect data from Fitbit
  • Collect data from RunKeeper
  • Collect data from Whistle
  • Display the data on a Map
  • Update the map in realtime as new measurements are recorded by RunKeeper
  • Expose all this in a nice way to my supporters so they could see the race progress.

The design

My starting, back-of-the-napkin design had me using the Streaming API to deliver updates to a public, authentication-free Force.com site. Unfortunately, I quickly discovered I can’t use the Streaming API on an unauthenticated site — sadness — so instead I wired it to periodically pull new records via JavaScript. I also discovered that, despite my best intentions, hooking RunKeeper, Fitbit and Whistle all up was a too-tall task for the six days I gave myself between hatching the thought and actually walking the 5k. What I ended up with was an app that received and interpreted GPS and walk data from RunKeeper and displayed it in near-realtime on a map. You can view a replay of our 5k walk here.

The data is populated from RunKeeper via a Heroku-based Rails middleware app. As I have time, I’ll flesh that out with data from Fitbit, annotating each marker with the number of steps taken since the last marker. Unfortunately, Whistle has declined (as of yet) to publish an API, leaving only a rather hacktastic, unofficial API that is currently broken (or my Charles-proxy-fu is weak).

Apophenia

Ever look at a cloud and think “Oh, hey, it’s a [Star Destroyer | Tardis | Puppy eating a Tardis!]”? If so, you’ve experienced apophenia: the experience of identifying patterns or meaning in seemingly random data. Our brains are really good at finding such patterns, but the information isn’t always presented in ways we can easily process. This is where analytics steps in, helping us visualize data and, in so doing, understand what it all means.

This is the real reason Salesforce1 is such a great aggregation platform. Its suite of analytics tools — from reports and dashboards to the Analytics API that can power D3-based visualizations like the ones Christophe Coenraets has blogged about — is unparalleled. After we’ve moved to 30-hour days and I have more time, I’ll update the maps page to show some D3-based analytics on the number of steps and elevation gain. That’s a start on visualizing how my pace slows per % of uphill grade, and it demonstrates that running downhill after your dog is still faster than tripping and rolling downhill after your dog.

A Dreamforce ’14 Hack

About a year ago, I was privileged to be able to participate in one of Apple’s iOS Dev Day conferences in New York City. Tickets were emailed out as Passbook passes, and as I approached the registration desk on the day of the event, my phone magically alerted me, and pulled up the ticket pass. Later that day, Apple’s Dev Evangelism Team explained that they’d built out the ticket passes with iBeacon technology. Their registration computers were broadcasting a Bluetooth 4.0 signal, and all the ticket holders with the pass in Passbook would automatically listen for a specific Bluetooth “beacon” and notify us when we came within range of the beacon. Ever since that day, I’ve been experimenting with Passbook, Passes and Beacons.

Before Dreamforce this year, I decided I wanted to find a way to harness Passes and Beacons to meet as many of my Twitter friends, fellow devs and the technologically curious as I could. In the end, I created a proximity-aware, “socially viral” e-business card that, through the power of Passbook, alerted anyone who came within beacon range of me.

A Pass Primer

The language surrounding iBeacons, Passbook and Passes is a bit befuddling, so let’s look at all the moving pieces here:

  • Passes: Passes can be one of a number of things: loyalty cards, event tickets, bus passes, etc. The overarching idea is that a Pass represents access to something. From a technical standpoint, a Pass is a zip file containing a signed .json file and a set of images. Importantly, a Pass is a standard!
  • Passbook: Passbook is an application included in iOS since v7.0 that is used to capture, display and store Passes. Because a Pass is a standard, there are numerous Passbook-like applications for Android, and Windows Phone’s Wallet app supports them as well.
  • iBeacon: iBeacon is the Apple name for a Bluetooth 4.0 (or Bluetooth Low Energy) transmitter broadcasting three specific pieces of information:
    1. UUID – A 32-hex-digit string uniquely identifying the beacon(s) used for a given purpose. There can be many beacons with the same UUID, but all beacons sharing a given UUID should be for the same purpose or from the same organization.
    2. Major value – An integer value used to group like beacons within a geographical area.
    3. Minor value – An integer value used to differentiate beacons with the same UUID / Major value.
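To make the three identifiers concrete, here’s a minimal Ruby sketch of how a listener might decide whether a detected broadcast matches a registered beacon region. The `Region` struct and field names are mine, not Apple’s; the point is that a region can pin down just the UUID, the UUID plus Major, or all three:

```ruby
# A registered "region" can match broadly (UUID only) or narrowly
# (UUID + major + minor). Nil fields act as wildcards.
Region = Struct.new(:uuid, :major, :minor) do
  def matches?(broadcast)
    uuid == broadcast[:uuid] &&
      (major.nil? || major == broadcast[:major]) &&
      (minor.nil? || minor == broadcast[:minor])
  end
end

# "All beacons in store 42, any area" -- minor is left as a wildcard.
store_region = Region.new("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", 42, nil)

hit  = { uuid: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", major: 42, minor: 7 }
miss = { uuid: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", major: 99, minor: 7 }

store_region.matches?(hit)  # => true
store_region.matches?(miss) # => false
```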

Use Cases

The UUID/Major/Minor scheme can be confusing, so here are two examples of where you might share a UUID/Major amongst several beacons.

Imagine you’re the CIO of a chain of supermarkets. You want to place beacons around your stores to advertise produce, steak, dry goods and dairy specials. Rather than assigning different UUID, Major and Minor numbers to every beacon in your stores, you can set them up so that the UUID is shared amongst all your stores, the Major # represents a single store id and the Minor # represents a particular area of a store. Set up this way, you could identify which stores are getting more beacon hits than others and, if you store timestamps, extrapolate the general flow-paths customers take through your stores. This would allow you, on a per-store basis, to design marketing and sales materials for the “highly visited” portions of your store.
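The per-store analytics idea can be sketched in a few lines of Ruby, assuming each beacon hit has been logged with its Major (store) and Minor (area). The record shape here is my own invention:

```ruby
# Each logged hit carries the beacon's major (store id) and minor (store area).
hits = [
  { major: 1, minor: 10 }, { major: 1, minor: 10 }, { major: 1, minor: 30 },
  { major: 2, minor: 10 }, { major: 2, minor: 20 }, { major: 2, minor: 20 },
]

# Group hits per store (major), then find the busiest area (minor) in each.
per_store = hits.group_by { |h| h[:major] }

per_store.each do |store, store_hits|
  busiest = store_hits.group_by { |h| h[:minor] }.max_by { |_, v| v.size }
  puts "Store #{store}: #{store_hits.size} hits, busiest area #{busiest.first}"
end
# Store 1: 3 hits, busiest area 10
# Store 2: 3 hits, busiest area 20
```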

On the other hand, say you’re a vendor at a large trade show with 400 other vendors struggling for the attention of the 145,000 attendees. You want to drive as much traffic to your booth as possible. Traditionally, you could accomplish this with unique, killer swag like quad-copters, skateboards and faux-pro cameras. Alternatively, you could establish a network of beacons sharing the same UUID and Major number that act as way-points within the conference hall to help attendees find your booth. Attendees whose phones have hit all the waypoints get the killer swag. Make it a game: a scavenger hunt to drive visitation at a collection of booths. The UUID would reference the conference, the Major # the vendor and the Minor # the waypoint or scavenger hunt step.
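The “hit all the waypoints” check behind the scavenger hunt is just set arithmetic. Here’s a sketch with made-up Minor values:

```ruby
require "set"

# Minor values for every waypoint in the hunt (the vendor = one Major number).
hunt_waypoints = Set[1, 2, 3, 4]

# Minors this attendee's phone has reported hitting, in any order.
visited = Set[3, 1, 4, 2]

# Swag is earned once every waypoint has been visited.
earned_swag = hunt_waypoints.subset?(visited)
puts earned_swag ? "Come get your quad-copter!" : "#{(hunt_waypoints - visited).size} waypoints to go"
```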

Regardless of the use case, there’s a singular challenge to using Beacons to broadcast proximity awareness: your end-user must have an app or Pass installed on the device. In my case, I tweeted the pass’s installation URL prior to Dreamforce, and set the pass up to display a bar code that Passbook (though sadly not any of the Android apps I tried) could scan-to-install. While that’s a seemingly significant hurdle, almost 1,500 people installed the pass before the end of Dreamforce with just a bit of advertising. App-based distribution of proximity alerts could reach far more people; for instance, were Salesforce to build beacon awareness into the Dreamforce app, virtually all attendees would have access.

How to build your own

To distribute the pass itself, and to provide a bit of insight into where people were snagging the pass from, I built a simple Rails app. As I mentioned earlier, a Pass is nothing more than a JSON file and some images, signed and zipped. To accomplish the signing and zipping, I used the excellent passbook gem. I’ve put the source of the Rails app up on BitBucket.

The important operational portion of the application is the app/controllers/pass_controller.rb file, which has an admittedly ugly HEREDOC containing the JSON needed for the Pass.

The JSON holds everything from my name to the beacons object that defines which beacon(s) (UUID/Major/Minor) the pass should respond to. A single pass can define multiple beacons to respond to! If you want to clone this and make your own e-biz card, note that you’ll need to modify the beacons object with your own UUID/Major/Minor and update the images.

A few other objects of note in the JSON are the “generic” object and the “backfields” object. These contain the key-value pairs for the information you want to display on the front (generic) or back (backfields) of your pass. If you’re creating other kinds of passes, these fields will be different.
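For a feel of the shape involved, here’s a trimmed, illustrative pass JSON wrapped in a Ruby heredoc and parsed back out. The beacons keys (proximityUUID, major, minor, relevantText) follow Apple’s pass format as I understand it, but the values and field entries are placeholders, not the real pass from the Rails app; check Apple’s PassKit documentation for the exact key nesting:

```ruby
require "json"

# Illustrative only -- the real JSON lives in app/controllers/pass_controller.rb.
pass_json = <<~JSON
  {
    "description": "E-business card",
    "beacons": [
      { "proximityUUID": "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0",
        "major": 1, "minor": 1,
        "relevantText": "codefriar is nearby -- come say hi!" }
    ],
    "generic": {
      "primaryFields": [{ "key": "name", "value": "Your Name Here" }],
      "backFields":    [{ "key": "twitter", "value": "@codefriar" }]
    }
  }
JSON

pass = JSON.parse(pass_json)
puts pass["beacons"].first["relevantText"]
```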

This Rails app is deployable to Heroku, and is set up to geolocate the IPs of pass installations. One interesting note: I expected a fairly even distribution of pass downloads across the world, but discovered that phone carriers tend to terminate their mobile data connections in a few select cities. Check out this map to see what I mean:

[Map: geolocated pass installations from the E-Card Passbook server]

This morning a friend asked for the low-down on Salesforce, SSLv3, POODLE and what a callout is. She was the fourth such person to ask about this, so I decided a quick primer on internet communication might help. The following isn’t meant to be the most technically correct set of definitions; it glosses over many details to provide a high-level, non-coder overview.

Computers on the internet communicate with each other using a set of protocols. You can think of a protocol as a sort of rigid dialect of a given language. In general, these protocols are written out as “TCP/IP,” which, in typical unoriginal geek naming convention, stands for “Transmission Control Protocol / Internet Protocol.” These protocols do the bulk of the work of sending data across the wires and through the tubes. They handle the mundane communication “conversations” that might look something like this:

Computer1: “Hey, You there, out in California. Sup?”

Computer2: “Hit me with some mad data yo.”

Computer1: “Ok, here’s this ultra-important tweet @codefriar wants to post”

<data>

Computer2: “Got it. Thanks yo. Tell @codefriar 201”

In the beginning was TCP/IP, along with other protocols you’ll recognize. Ever seen HTTP:// ? FTP:// ? These are data protocols that define how a web page’s or a file’s data is transmitted. If you’ll permit me an analogy from Taco-hell, internet communication is not unlike a 7-layer burrito: HTTP layered on top of TCP/IP, etc. Even as TCP/IP + HTTP did the vast bulk of the work, as the internet grew up, we consumers decided that sending our credit cards to vendors unencrypted was a “bad idea”(tm). In response, some wicked-smart and well-meaning fellows at Netscape (remember them?) developed a thing called Secure Sockets Layer, or SSL. SSL is an optional layer designed to sit between TCP/IP and HTTP. A long time ago (10 years ago, no kidding) SSL was superseded by TLS, or Transport Layer Security. SSL and its replacement TLS work by establishing a protocol-like negotiation between two computers that looks something like this:

Computer1: Hi, my user asked me to talk to you, but I don’t trust the internet; because internet. So if you don’t mind, tell me who you are, and tell me what encryption schemes you speak. I’m going to start our negotiations with TLS1.2.

Computer2: Uh, due to a network glitch, old hardware, old software, or just because I’m grouchy, I’m going to offer TLS1.0.

Computer1: Ugh, stupid computer, I guess TLS1.0 will work. Now let’s create a one-time encryption key for this session that only you and I will know about.

Computer2: Sure, though I think your attitude towards my “enterprise” (i.e. out of date) TLS version is quite rude. Here’s my public key, and a one-time key. <key data>

Computer1: “enterprise my ass”, I’ll accept the key.

<data>

Computer1: kthxbai

Any further communication between the two computers is then encrypted with that session specific key. This is a “Good Thing”(tm).

The important part here is that the two computers negotiate which encryption scheme to use. As you can imagine, the computers try to negotiate the highest level of encryption they both support.
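That negotiation, and the downgrade risk that comes with it, can be sketched as picking the strongest mutually supported version from an ordered list. The version names and preference order here are illustrative:

```ruby
# Protocol versions, best first. Illustrative list only.
PREFERENCE = ["TLSv1.2", "TLSv1.1", "TLSv1.0", "SSLv3"].freeze

# Pick the strongest version both sides support; nil means the
# handshake fails and no call is made at all.
def negotiate(client_supports, server_supports)
  PREFERENCE.find { |v| client_supports.include?(v) && server_supports.include?(v) }
end

negotiate(["TLSv1.2", "TLSv1.0", "SSLv3"], ["TLSv1.0", "SSLv3"])  # => "TLSv1.0"
negotiate(["TLSv1.2", "SSLv3"],            ["SSLv3"])             # => "SSLv3" (POODLE territory)
negotiate(["TLSv1.2"],                     ["SSLv3"])             # => nil (the call fails)
```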

Here’s where the POODLEs come in. Some very smart, well-meaning encryption gurus at Google found that computers can be fooled into negotiating down to a less-secure version of encryption, and that the less-secure encryption used is, in a word, useless. POODLE is the name the Google researchers gave their exploit. In their own words, with POODLE:

…there is no reasonable workaround. This leaves us with no secure SSL 3.0 cipher suites at all: to achieve secure encryption, SSL 3.0 must be avoided entirely.

(Emphasis mine.) POODLE is dangerous precisely because the encryption methods offered by SSLv3 are weak enough that a “bad person”(tm) could listen in on communications and steal information. (Jerks.)

Now, let’s put some legs on this set of concepts. If you want to buy something online, your computer is going to initiate that encryption-version-detection dance. If you’re buying from a major online vendor, say one based in the lovely land of Washington, you’ll find that their computers will not accept SSL v3.0, because that would be insecure. This is a good and wonderful thing.

On the other hand, let’s say you’re a company that provides a platform for software development. As part of that platform, you allow your developers to make “callouts” to other internet-based services. First, what do I mean by callout? Simply put, a callout is any time the platform initiates communication with a non-platform server; in other words, any time you ask the platform to “call” out to another computer. As you can imagine, these callouts are SSL-enabled, meaning that whenever possible, communication between the platform and the external computer is encrypted. Unfortunately, this also means that if the computer being called out to negotiates the encryption down to SSLv3, well, it’s effectively unencrypted. This is a “Bad Thing”(tm).

Now, to be even more specific, this means that:

  • If your Salesforce org communicates with any other internet-connected computer because you’ve asked it to talk to, say, your SharePoint server (note: SharePoint is just an example, and I cannot speak to the myriad complex configuration mistakes that could cause a SharePoint service to degrade to SSLv3)
  • If that computer has SSLv3 enabled
  • If the encryption scheme negotiation is, for whatever reason, forced to degrade to SSLv3

Then your communication is effectively unencrypted, and a sufficiently motivated attacker could get at your data.

Here’s the nasty catch: if either side has disabled SSLv3, and the encryption negotiation cannot settle on a version of TLS, the entire call will fail, because not making the call is preferable to making a call that everyone can read. So if your SharePoint server’s admin has disabled SSLv3, but for whatever reason Salesforce cannot negotiate a version of TLS with your SharePoint server, the communication stops and the callout fails because no suitable encryption scheme can be agreed on. Updates to SharePoint may start failing, for instance.

In a perfect world, all computers would be upgraded to prevent SSLv3 from being used. Importantly, if only one side of the communication prohibits SSLv3 and the two computers are able to negotiate a higher level of encryption, this isn’t an issue. If you own the server(s) being called out to, you can work to ensure you properly accept TLS1.2.

Or you can wait until Salesforce stops allowing SSLv3 on their end… on 12/20/2014.

Either way, SSLv3 should be disabled!