I’m a PC. I’m a Mac. I’m Linux

Back in the day there were really only two or three holy wars that coders inevitably got dragged into. First there was the war of the editors: Vi(m) v. Emacs. (Sublime FTW.) Then there’s Windows v. Mac v. Linux v. *BSD. Finally, it seems that everyone has their own post on why their favorite programming language is better than X.

After my last post, I received a number of responses on Twitter and here in the comments about naming things like classes, objects and methods. After some conversation with other developers, I’ve discovered a new (cold) holy war among developers: naming conventions.

A Rose Method by any other name…

There are a few well-known naming conventions for variables – Hungarian variable notation comes to mind. Several frameworks and languages have specific conventions as well. Rails (well, more specifically Active Record) has class and model conventions that describe how you name objects and their controllers. In both cases, the conventions are there to help developers understand the meaning and purpose behind what they’re referring to. Hungarian notation prefixes a logical type identifier to the variable name. Rails’ conventions illustrate whether you’re dealing with an object, a database table or a model. In Salesforce, however, it seems we’re much more loosey-goosey with our naming schemes. I polled a number of developers I respect about how they name classes and methods, and by far the most thought-out answer I received was:

Carefully. ~Akrikos

To be honest, at first I thought his answer was a bit flippant, but he followed up by saying: “Every time I’ve named something poorly I’ve regretted it. Especially in the Salesforce world where there’s such a high cost to changing the name of something.” He’s right, of course, that there’s a high price to pay for renaming a class in Salesforce. The platform demands integrity and renaming something means touching everywhere it’s used. So then, what’s in a name?

What’s in a name?

Naming a method or a class is all about assigning meaning. Conveying meaning in a limited number of characters is all about conventions. Conventions provide consistency that enables at-a-glance understanding. It stands to reason, then, that regardless of what conventions you adopt, you should strive to enforce them 100% consistently. Here are some conventions, in no particular order, that I think you should adopt:

  1. Filenames should have standardized endings. This helps you quickly show the purpose of the file and its class.
    1. Test classes end in Tests.
    2. Custom Visualforce Controllers end in Ctrl
    3. Custom Visualforce Controller Extensions end in CtrlExt
  2. Bias towards longer descriptive names and not shorter ones: ThisMethodDescribesWhatItDoes() v. magic()
  3. Put reusable utility code in a utils or lib class, e.g. “CommonContactUtilities”
  4. Group related code together with an “internal namespace”. For instance, if you’ve developed a set of Visualforce pages and Extensions for feature XYZ internally namespace them like:
    1. XYZ_CustomAccountDemoCtrlExt
    2. XYZ_VisualForcePageOfAwesome
    3. XYZ_CustomAccountDemoCtrlExt_Tests
  5. Async classes should include in their name the class they’re scheduling or batching
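Applied together, the conventions above shake out into class names like the following. This is a hypothetical sketch (every name here is invented; in a real org each class lives in its own file):

```apex
// Hypothetical class names for a feature internally namespaced "XYZ".
public with sharing class XYZ_AccountImporterCtrl {}      // custom Visualforce controller
public with sharing class XYZ_AccountImporterCtrlExt {}   // controller extension
public class XYZ_AccountImporterCtrl_Tests {}             // its test class
public class XYZ_CommonContactUtilities {}                // reusable utility code
public class XYZ_AccountImporter_Batch {}                 // async class, named for the class it batches
```

At a glance you can tell which feature a class belongs to, what it does and where its tests live.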

Homage to Sandi Metz

A year or so ago, Ruby Hero and author of Practical Object-Oriented Design in Ruby Sandi Metz gave us her four rules of Ruby Development. As stated, her four rules are:

  1. Classes can be no longer than one hundred lines of code.
  2. Methods can be no longer than five lines of code.
  3. Pass no more than four parameters into a method. Hash options are parameters.
  4. Controllers can instantiate only one object. Therefore, views can only know about one instance variable and views should only send messages to that object (@object.collaborator.value is not allowed).

While her rules are rather specific to Ruby development, I believe the ideas behind them are fairly universal and useful. With that in mind, here is a set of rules for Apex development, along with the rationale behind each.

My highly opinionated rules for Apex simplicity

  1. Classes can be no longer than 300 lines of code. — I took three classes from production Apex projects and ported the code to idiomatic Ruby. Each of my Apex classes used approximately three times the number of lines that the Ruby code took, hence 300 lines. Putting a cap on the number of lines has a number of benefits, but perhaps the biggest is found by enforcing a clarity of purpose on the class. Recently, I had the opportunity to work with a 2k-line class, “Coupon”, that handled five types of discounts. Breaking those five discounts into five classes helps convey which class has responsibility for what.
  2. 20 lines per method — This is easily the most contentious rule. The primary benefit of following this rule is that your code will necessarily be simpler. Nested if statements and long blocks of code for edge cases are now tucked away in other, more tightly focused — and more easily tested — methods. Like Thoughtbot’s implementation of Metz’s rules, I think this could use some explanation, though, as not everything makes sense as a ‘line’. Conditional logic keywords like if/else each count as a line. The line constraint will also likely reduce your use of else if() constructs, keeping your code simpler as well! So why 20 lines? Well, Ruby has an implicit return structure, whereas Apex requires explicit return statements, adding at least one extra line per method. Couple that with a generally more verbose language, and I felt like 20 was the sweet spot. It’s a fairly arbitrary choice, I admit, but in some testing I’ve found that I can usually get my methods into 20 lines (not counting comments. You are commenting, right?)
  3. Only four method arguments per method — First, let me state an exception to this rule. Factory methods for producing test data are immediately exempt. For instance, a factory method for returning an Order may require passing in a User, an Account, a Contact, an Opportunity as well as… well, you get the idea. Unlike test methods, most ‘live’ methods accept parameters for one of two reasons: direct manipulation of data or conditional logic. Again, simplicity is the real reason and winner here. Limiting each method to no more than four incoming parameters forces you to have different methods for different conditional branches, making your methods short, concise and tightly focused.
  4. Visualforce Controllers can only talk to a single instance variable. I admit, I have a hard time following this one. The idea here is to expose the entirety of your data through a single wrapper object. This forces architectural changes on down the line. Your controller or controller extension becomes focused not on gathering data, but on preparing it for the view. This has a number of benefits, chief among them code simplicity and testability. Note, this doesn’t include controller methods, just the data being exposed on the page.

If you’re sensing an underlying theme to these, it’s simplicity – keeping the code as simple as possible. These rules help enforce simplicity. Of course, sooner or later we’ll hit a class with 350 lines and no good refactoring path. And that’s OK too. Break the rules when you have to, not just because you can.
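To make the method-length rule concrete, here’s a hypothetical sketch (the discount logic is invented purely for illustration) of the kind of tightly focused methods the 20-line limit pushes you toward:

```apex
// Hypothetical example: each method stays well under 20 lines,
// and each has exactly one reason to exist.
public with sharing class PercentageDiscount {
  public Decimal apply(Decimal amount) {
    if (amount == null || amount <= 0) {
      return 0;
    }
    return amount - discountFor(amount);
  }

  private Decimal discountFor(Decimal amount) {
    return amount * rateFor(amount);
  }

  private Decimal rateFor(Decimal amount) {
    // High-value orders earn a deeper discount.
    return (amount >= 100000) ? 0.10 : 0.05;
  }
}
```

Notice there’s no nested conditional anywhere; each branch got its own small, separately testable method.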


I’ve been using the Salesforce Mobile SDK for a few years now. Every version has been a big improvement. However, the latest release, version 3.1, has a couple of noteworthy additions. So noteworthy that I want to specifically address them. While there are some technical features included — it’s now a CocoaPod! — there are two non-technical features I want to highlight.

Unified Application Architecture in the Mobile SDK

In their own words, Salesforce has:

unified the app architecture so that apps built with the latest SDK – hybrid or native – now have access to the same core set of functionality regardless of their target platform. All the libraries, all the APIs, and all the major mobile building blocks have consistent implementations, while at the same time, provide full access to the host operating system.

To break that down, what they’re saying in their understated way is that you can now write apps for Android and iOS using your choice of:

  • Objective-C or Swift (with a bit more work) on iOS.
  • Java or C (with a bit more work) on Android.
  • HTML5, CSS and JavaScript for both iOS and Android.

Theoretically, if you’re willing to put in some extra work, you could place the JS within a Windows 8.x phone container, enabling hybrid Salesforce connected apps on Windows 8.x. What’s most impressive about this platform feature parity is that all the building blocks of a connected Salesforce mobile application have consistent implementations. This allows for an unprecedented level of flexibility for developers, who can now make platform choices based on business needs (I need to support iOS and Win Phone 8.1) rather than what features are available (I need SmartStore, so I have to use the JS/hybrid platform).

Docs and Examples from the Mobile SDK

The second feature of the new SDK version that I want to highlight is actually near and dear to my heart. For me, there are two equally important forces that drive my understanding and adoption of a new API, SDK or technology: documentation and example code.

Docs are a coder’s best friend, providing all the nitty-gritty details of how every individual function works. The Mobile SDK has a brilliant set of documentation (the iOS docs are here). Without these, using the SDK would be a black-box nightmare.

On the other hand (or fist), example code is a noobie’s best friend. When you’re starting out with a new API or SDK, you need to see what is happening. Indeed, you need to see what to do more than how to do it. ‘Do I login first thing? Or show a settings dialog?’ Example code leads to the eureka of ‘Oh, I see — I present the login view first.’ Moreover, example code exposes new users to what methods and functions are available! Included in version 3.1 of the SDK we find iOS and Android example applications. More importantly, we also find a hybrid (HTML, CSS, JS) example. Additionally, the SDK has related examples using Polymer. Perhaps most interestingly, the developer evangelism team has example code using Angular / Ionic!

While each release of the SDK has brought new and exciting features, version 3.1 brings not only features but a level playing field for platform development, and examples to bring new developers up to speed quickly. Now if only they’d support RubyMotion.

If you’ve not yet heard about Trailhead, click here. I’ll wait. Back? Great. Trailhead is awesome, and I’ve been a big fan since it was released at Dreamforce ‘14. It’s got modules for both the clicking and the coding developers among us. Regardless of your development style, the mix of videos, how-tos and exercises is helpful for learning and sharpening your skills. About two weeks ago, I finished all the available modules and tweeted out that I was finished. At the time I was promised more modules, and this morning I woke up to two wholly new modules and two modules with new exercises!


Trailhead Data Security

Data Security is the first new module. It teaches users the finer points of Field Level Security (FLS), Sharing Rules, Role Hierarchy, and related information. Perhaps the most important section is the overview of the module. I don’t want to ruin the fun, but I will say you might want to print this off and post it on your wall: a handy infographic on data security! It should be pointed out that Data Security is possibly the single most important and complex topic covered by Trailhead, and it’s critical we all become experts on this. Also, for the record, forcing users to use a hardware token is only an organizational security feature in my head.

Trailhead Change Management

Trailhead Change Management is the second new module released today. It discusses the finer points of using sandboxes, change sets and the overall workflow recommended for Salesforce feature development. While Trailhead is aimed at developers, I wish this module were somehow required homework for the managers and budget holders who argue against providing developers sandboxes! Build, test, deploy! Build in a sandbox, deploy to production!


In addition to the two new modules, there are new challenges associated with the Testing and Apex modules. Interestingly, the Trailhead site now shows a preview of things to come, with modules for:

  1. Asynchronous Apex
  2. Apex Integration Services
  3. Visualforce & JavaScript
  4. App Deployment

I, for one, look forward to finishing up the Data Security and Change Management modules while they prepare the new Apex modules! You can get started with Trailhead by clicking here!


A while back I stumbled across a situation where I needed to do a Visualforce mail merge, but I didn’t want to send the email. Unfortunately, there’s no built-in way to do that. Salesforce’s Visualforce merge code doesn’t give you a “getter” for the merge result. Instead, the normal workflow looks like this:

    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    mail.setTargetObjectId(someContactId); // placeholder Id; merge fields resolve against this record
    mail.setTemplateId(someTemplateId);    // placeholder Id of the template to merge
    String[] toAddresses = new String[]{'theDoctor@who.com'};
    mail.setToAddresses(toAddresses);
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] {mail});

In fact, there’s not even a .merge() method exposed in Apex. The merging happens as part of Messaging.sendEmail();

However, after some research I discovered that PJC over on Stackexchange had figured out that a DB savepoint could be (ab)used to grab the template contents after merging. This is Neat(c).

Fast forward a few months: [LifeWithRyan](http://www.sudovi.com/) and I were talking on IRC about this same problem. We agreed to both blog our solutions. His is here: [When an Email Template just isn’t enough](http://www.sudovi.com/when-an-email-template-just-isnt-enough/). I decided to wrap the method I found up in a reusable class: MailUtils.cls. MailUtils offers a single static method, getMergedTemplateForObjectWithoutSending(Id targetObjectId, Id templateId, Boolean useSig, Boolean saveActivity, String senderDisplayName), that takes the work out of this. It returns a Map with the following keys:
  • textBody: the merged text body
  • htmlBody: the merged HTML version
  • subject: the subject line of the email

Here’s MailUtils.cls in its full ‘glory’:

public class mailUtils {
  public class mailUtilsException extends Exception {}

  public Boolean useSig {get; private set;}
  public Boolean saveActivity {get; private set;}
  public String senderDisplayName {get; private set;}

  public mailUtils(Boolean useSig, Boolean saveActivity, String senderDisplayName){
    this.useSig = useSig;
    this.saveActivity = saveActivity;
    this.senderDisplayName = senderDisplayName;
  }

  // Derived from:
  // http://salesforce.stackexchange.com/questions/13/using-apex-to-assemble-html-letterhead-emails/8745#8745
  public Messaging.SingleEmailMessage MergeTemplateWithoutSending(Id targetObjectId, Id templateId) {
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    // Intentionally set a bogus email address.
    String[] toAddresses = new String[]{'invalid@emailaddr.es'};
    mail.setToAddresses(toAddresses);
    mail.setTargetObjectId(targetObjectId);
    mail.setTemplateId(templateId);
    mail.setUseSignature(useSig);
    mail.setSaveAsActivity(saveActivity);
    mail.setSenderDisplayName(senderDisplayName);

    // Create a save point.
    Savepoint sp = Database.setSavepoint();
    // Force the merge of the template.
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] {mail});
    // Force a rollback, and cancel the mail send.
    Database.rollback(sp);

    // Return the mail object.
    // You can access the merged template, subject, etc. via:
    // String mailTextBody = mail.getPlainTextBody();
    // String mailHtmlBody = mail.getHTMLBody();
    // String mailSubject = mail.getSubject();
    return mail;
  }

  public static Map<String,String> getMergedTemplateForObjectWithoutSending(Id targetObjectId, Id templateId, Boolean useSig, Boolean saveActivity, String senderDisplayName) {
    Map<String,String> returnValue = new Map<String,String>();
    mailUtils mu = new mailUtils(useSig, saveActivity, senderDisplayName);
    Messaging.SingleEmailMessage mail = mu.MergeTemplateWithoutSending(targetObjectId, templateId);
    returnValue.put('textBody', mail.getPlainTextBody());
    returnValue.put('htmlBody', mail.getHTMLBody());
    returnValue.put('subject', mail.getSubject());
    return returnValue;
  }
}

I never thought I’d be the kind of person to “do” a 5k. In fact, had you asked me a month ago, I’d have told you that Americans don’t understand the Metric system, and that 5k runners were clearly invading Canadians in disguise. And yet, two weekends ago I found myself standing at the starting line of the Santa Paws 5k surrounded by 300 invading Canadians and their four-legged best friends. Santa Paws is the annual fundraiser for the Wake SPCA, here in North Carolina. Of course, I couldn’t just walg (walk-jog) the 5k; I had to turn it into an excuse to play with new technology!

The Quantified Self.

For a while now I’ve been fascinated by the idea of the Quantified Self. In short, the idea is to log one’s activities, locations and events, in the hope that more data can lead to better decisions. Some of this has been going on for decades: keeping one’s day-planner or diary updated is likely the analog beginning of the Quantified Self. The innovation of our generation is found in passive collection. Put a Fitbit, Up (or soon, Apple Watch) on your wrist and your daily sleep and activity is automatically recorded. In the age of smartphones with motion co-processors and clever apps, we can even move beyond automatically gathering data to proactively prompting actions. Every day, ’round about 5pm, RunKeeper lovingly (annoyingly?) tells me, “Remember, you used to think this was a fine time to go for a walk. Want to go for one now?” Our access to technology has raised the bar from passive collection of data to actively prompting and encouraging better decisions. (And yes, even I have to admit that more activity is a better decision for me.)

The Problem

Kevin and Lilo rest after the finish of the SantaPaws 5k

Lilo is my adorable 45lb Carolina dog. She’s ferociously smart, loves Babyfriar and is the secret to cheap, boundless energy — if we could only figure out how to harness it. I’ve worried that she’s not as active as she needs to be, because her brothers are decidedly lower-energy critters. She needs more exercise. Right about the time I signed up to do Santa Paws with Lilo, I found out about Whistle, a wifi-enabled ‘Fitbit’ for dogs. Whistle uses a Bluetooth connection to your phone to identify who the pup is with during activities, and it can differentiate between a walk, playtime and sleeping-in-the-most-uncomfortable-looking-position-on-the-couch. A quick trip to the pet store and Lilo was Whistle’d. Not only are we quantifying ourselves, but our best friends as well. Indeed, with the RunKeeper app providing information like distance walked and current and average pace while recording GPS locations, it’s possible not only to quantify any given walk, but also to use that data to, for instance, slowly increase pace and distance for training purposes. The problem, however, is that the RunKeeper data is siloed off in the RunKeeper app while the Whistle data is hidden away inside the Whistle app.

Enter Salesforce

I believe Salesforce provides an ideal platform for self-quantification. Its APIs provide a rich environment for integrating and aggregating data from a myriad of sources. And so a project was born: a real-time updating map of Lilo’s and my progress on the 5k course, with GPS data from RunKeeper, step information from Fitbit and activity information from Whistle.

Building it out

As with many a side project, the design was simple on paper, but proved rather challenging to implement. I wanted an app that would:

  • Collect data from Fitbit
  • Collect data from RunKeeper
  • Collect data from Whistle
  • Display the data on a Map
  • Update the map in realtime as new measurements are recorded by RunKeeper
  • Expose all this in a nice way to my supporters so they could see the race progress.

The design

My starting, back-of-the-napkin design had me using the Streaming API to deliver updates to a public, authentication-free Force.com site. Unfortunately, I quickly discovered I can’t use the Streaming API on an unauthenticated site — sadness — so instead I wired it to periodically pull new records via JavaScript. I also discovered that, despite my best intentions, hooking RunKeeper, Fitbit and Whistle all up was a too-tall task for the six days I gave myself between hatching the thought and actually walking the 5k. What I ended up with was an app that received and interpreted GPS and walk data from RunKeeper and displayed it in near-realtime on a map. You can view a replay of our 5k walk here.

The data is populated from RunKeeper via a Heroku-based Rails middleware app. As I have time, I’ll flesh that out with data from Fitbit, annotating each marker with the number of steps taken since the last marker. Unfortunately, Whistle has declined (as of yet) to publish an API, leading to a rather hacktastic, unofficial API that is currently broken (or my charles-proxy-fu is weak).


Ever look at a cloud and think “Oh, hey, it’s a [Star Destroyer | Tardis | Puppy eating a Tardis!]”? If so, you’ve experienced apophenia: the experience of identifying patterns or meaning in seemingly random data. Our brains are really good at finding such patterns, but the information isn’t always presented in ways we can easily process. This is where analytics steps in, helping us visualize data and, in so doing, understand what it all means.

This is the real reason Salesforce1 is such a great aggregation platform. Its suite of analytics tools — from reports and dashboards to the Analytics API that can power D3-based visualizations like the ones Christophe Coenraets has blogged about — is unparalleled. After we’ve moved to 30-hour days and I have more time, I’ll update the maps page to show some D3-based analytics charting the number of steps and elevation gain. That’s a start on visualizing how my pace slows per % of grade uphill, and it demonstrates that running downhill after your dog is still faster than tripping and rolling downhill after your dog.

A Dreamforce ’14 Hack

About a year ago, I was privileged to be able to participate in one of Apple’s iOS Dev Day conferences in New York City. Tickets were emailed out as Passbook passes, and as I approached the registration desk on the day of the event, my phone magically alerted me, and pulled up the ticket pass. Later that day, Apple’s Dev Evangelism Team explained that they’d built out the ticket passes with iBeacon technology. Their registration computers were broadcasting a Bluetooth 4.0 signal, and all the ticket holders with the pass in Passbook would automatically listen for a specific Bluetooth “beacon” and notify us when we came within range of the beacon. Ever since that day, I’ve been experimenting with Passbook, Passes and Beacons.

Before Dreamforce this year, I decided I wanted to find a way to harness Passes and Beacons to meet as many of my Twitter friends, fellow devs and the technologically curious as I could. In the end, I created a proximity-aware, “socially-viral” e-business card that, through the power of Passbook, alerted anyone who came within beacon range of me.

A Pass Primer

The language surrounding iBeacons, Passbook and Passes is a bit befuddling, so let’s look at all the moving pieces here:

  • Passes: A Pass can be one of a number of things: a loyalty card, an event ticket, a bus pass, etc. The overarching idea is that a Pass represents access to something. From a technical standpoint, a Pass is a zip file containing a signed .json file and a set of images. Importantly, a Pass is a standard!
  • Passbook: Passbook is an application included in iOS since v7.0 that is used to capture, display and store Passes. Because a Pass is a standard, there are numerous Passbook-like applications for Android, and Windows Phone’s Wallet app supports them as well.
  • iBeacon: iBeacon is the Apple name for a Bluetooth 4.0 (or Bluetooth Low Energy) transmitter broadcasting 3 specific pieces of information:
    1. UUID – A 32-digit string uniquely identifying the beacon(s) used for a given purpose. There can be many beacons with the same UUID, but all beacons sharing a given UUID should be for the same purpose or from the same organization.
    2. Major value – This is an integer value used to group like beacons within a geographical area.
    3. Minor value – This is an integer value used to differentiate beacons with the same UUID / Major value.

Use Cases

The UUID/Major/Minor scheme can be confusing, so here are two examples of where you might share a UUID/Major amongst several beacons.

Imagine you’re the CIO of a chain of supermarkets. You want to place beacons around your stores to advertise produce, steak, dry goods and dairy specials. Rather than assigning different UUID, Major and Minor numbers to every beacon in your stores, you can set them up so that the UUID is shared amongst all your stores, the Major # represents a single store ID and the Minor # represents a particular area of the store. Set up this way, you could identify which stores are getting more beacon hits than others and, if you store timestamps, extrapolate the general flow-paths customers take through your store. This would allow you, on a per-store basis, to design marketing and sales materials for the “highly visited” portions of your store.

On the other hand, say you’re a vendor at a large trade show with 400 other vendors struggling for the attention of the 145,000 attendees. You want to drive as much traffic to your booth as possible. Traditionally, you could accomplish this with unique, killer swag like quad-copters, skateboards and faux-pro cameras. Alternatively, you could establish a network of beacons sharing the same UUID and Major number that act as waypoints within the conference hall to help attendees find your booth. Attendees whose phones have hit all the waypoints get the killer swag. Make it a game, a scavenger hunt, to drive visitation at a collection of booths. The UUID would reference the conference, the Major # the vendor and the Minor # the waypoint or scavenger-hunt step.

Regardless of the use case, there’s a singular challenge to utilizing beacons to broadcast proximity awareness: your end user must have an app, or Pass, installed on their device. In my case, I tweeted the pass’s installation URL prior to Dreamforce, and set the pass up to display a barcode that Passbook (though sadly not any of the Android apps I tried) could scan to install. While that sounds like a significant hurdle, almost 1,500 people installed the pass before the end of Dreamforce with just a bit of advertising. App-based distribution of proximity alerts can reach far more people. For instance, were Salesforce to build beacon awareness into the Dreamforce app, virtually all attendees would have access.

How to build your own

To distribute the pass itself, and to provide a bit of insight as to where people were snagging the pass from, I built a simple Rails app. As I mentioned earlier, the Pass is nothing more than a JSON file, and some images that are signed and zipped. To accomplish the signing and zipping of the Pass, I used the excellent passbook gem. I’ve put the source of the Rails app up on BitBucket.

The important operational portion of the application is the app/controllers/pass_controller.rb file, which has an admittedly ugly HEREDOC containing the JSON needed for the Pass.

The JSON holds everything from my Name to the beacons object that defines which beacon(s) UUID/Major/Minor it should respond to. A single pass can define multiple beacons to respond to! If you want to clone this and make your own e-biz card, note that you’ll need to modify the beacons object with your own UUID/Major/Minor and update the images.

A few other objects of note in the JSON are the “Generic” object and the “backfields” object. These objects contain the key-value pairs for the information you want to display either on the front (generic) or back (backfields) of your pass. If you’re creating other kinds of passes these fields will be different.
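For reference, the relevant slices of the pass JSON look roughly like this. This is a sketch: the identifiers, UUID and field values below are placeholders, not the ones from my actual pass.

```json
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.com.example.bizcard",
  "serialNumber": "0001",
  "teamIdentifier": "XXXXXXXXXX",
  "organizationName": "Codefriar",
  "description": "e-business card",
  "beacons": [
    {
      "proximityUUID": "F0000000-0000-0000-0000-000000000001",
      "major": 1,
      "minor": 1,
      "relevantText": "Kevin is nearby. Come say hi!"
    }
  ],
  "generic": {
    "primaryFields": [
      { "key": "name", "label": "Name", "value": "Kevin P." }
    ],
    "backFields": [
      { "key": "twitter", "label": "Twitter", "value": "@codefriar" }
    ]
  }
}
```

The relevantText string is what pops up on the lock screen when the phone hears the matching beacon.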

This Rails app is deployable to Heroku and is set up to geolocate the IPs of pass installations. One interesting note: I expected a fairly even distribution across the world for pass downloads, but discovered that phone carriers tend to terminate their mobile data connections in a few select cities. Check out this map to see what I mean:


This morning a friend asked for the low-down on Salesforce, SSLv3, Poodle and what a Callout was. She was the fourth such person to ask about this, and I decided a quick primer on internet communication might help. The following isn’t meant to be the most technically correct set of definitions, glossing over many details to provide a high-level, non-coder overview.

Computers on the internet communicate with each other using a set of protocols. You can think of a protocol as a sort of rigid dialect of a given language. In general, these protocols are described and written out as “TCP/IP”, which stands, in typically unoriginal geek naming fashion, for “Transmission Control Protocol / Internet Protocol.” These protocols do the bulk of the work of sending data across the wires and through the tubes. They handle the mundane communication “conversations” that might look something like this:

Computer1: “Hey, You there, out in California. Sup?”

Computer2: “Hit me with some mad data yo.”

Computer1: “Ok, here’s this ultra-important tweet @codefriar wants to post”


Computer2: “Got it. Thanks yo. Tell @codefriar 201”

In the beginning was TCP/IP and other protocols you’ll recognize. Ever seen HTTP:// ? FTP:// ? These are data protocols that define how a web page’s or a file’s data is transmitted. If you’ll permit me an analogy from Taco-hell, internet communication is not unlike a 7-layer burrito: HTTP layered on top of TCP/IP, etc. Even as TCP/IP + HTTP does the vast bulk of the work, as the internet grew up, we consumers decided sending our credit cards to vendors unencrypted was a “bad idea”(tm). In response, some wicked-smart and well-meaning fellows at Netscape (remember them?) developed this thing called Secure Socket Layer, or SSL. SSL is an optional layer designed to sit between TCP/IP and HTTP. A long time ago (10 years ago, no kidding) SSL was replaced by TLS, or Transport Layer Security. SSL and its replacement TLS function by establishing a protocol-like negotiation between two computers that looks something like this:

Computer1: Hi, my user asked me to talk to you, but I don’t trust the internet; because internet. So if you don’t mind, tell me who you are, and tell me what encryption schemes you speak. I’m going to start our negotiations with TLS1.2.

Computer2: Uh, due to a network glitch, old hardware, old software, or just because I’m grouchy, I’m going to offer TLS1.0.

Computer1: Ugh, stupid computer, I guess TLS1.0 will work. Now let’s create a one-time encryption key for this session that only you and I will know about.

Computer2: Sure, though I think your attitude towards my “enterprise” (ie: out of date) TLS version is quite rude. Here’s my public key, and a one-time key. <key data>

Computer1: “enterprise my ass”, I’ll accept the key.


Computer1: kthxbai

Any further communication between the two computers is then encrypted with that session specific key. This is a “Good Thing”(tm).

The important part here is that the two computers negotiate which encryption scheme to use. As you can imagine, the computers try to negotiate the highest level of encryption they both support.
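That negotiation step can be sketched in a few lines. This is a toy illustration of the idea, not any real TLS library’s API; the function name and version table here are mine:

```javascript
// Each side advertises the protocol versions it supports. A hypothetical
// helper (not a real TLS API) picks the highest version both peers
// understand, or fails the handshake entirely when there's no overlap.
const VERSION_RANK = { "SSLv3": 0, "TLSv1.0": 1, "TLSv1.1": 2, "TLSv1.2": 3 };

function negotiateVersion(clientVersions, serverVersions) {
  const shared = clientVersions.filter((v) => serverVersions.includes(v));
  if (shared.length === 0) {
    // Mirrors the real-world failure mode: no common version, no call.
    throw new Error("Handshake failed: no mutually supported version");
  }
  return shared.sort((a, b) => VERSION_RANK[b] - VERSION_RANK[a])[0];
}

// A modern client talking to an out-of-date server degrades to TLS1.0:
negotiateVersion(["TLSv1.2", "TLSv1.1", "TLSv1.0"], ["TLSv1.0", "SSLv3"]);
// → "TLSv1.0"
```

Note that the client in this sketch happily degrades as far as the server asks it to; that willingness to degrade is exactly what POODLE exploits.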

Here’s where the POODLEs come in. Some very smart, well-meaning encryption gurus at Google found out that computers can be fooled into negotiating down to a less-secure version of encryption, and that the less-secure encryption used is, well, in a word, useless. POODLE is the name the Google researchers gave their exploit. In their own words, POODLE results in:

…there is no reasonable workaround. This leaves us with no secure SSL 3.0 cipher suites at all: to achieve secure encryption, SSL 3.0 must be avoided entirely.

(Emphasis mine). POODLE is dangerous precisely because the encryption methods offered by SSLv3 are weak enough that a “bad person”(tm) could listen in on communications and steal information. (Jerks.)

Now, let’s put some legs on this set of concepts. If you want to buy something online, your computer is going to initiate that encryption-version-detection dance. If you’re buying from a major vendor online, say one based in the lovely land of Washington, you’ll find that their computers will not accept SSLv3, because that would be insecure. This is a good and wonderful thing.

On the other hand, let’s say you’re a company that provides a platform for software development. As part of that platform, you allow your developers to make “callouts” to other internet-based services. First, what do I mean by callout? Simply put, a callout is any time the platform initiates communication with a non-platform server. In other words, any time you ask the platform to “call” out to another computer. As you can imagine, these callouts are SSL-enabled, meaning that whenever possible, communication between the platform and the external computer is encrypted. Unfortunately, this also means that if the computer being called out to negotiates the encryption down to SSLv3, well, it’s effectively unencrypted. This is a “Bad Thing”(tm).

Now, to be even more specific, this means that:

  • If your Salesforce org communicates with any other internet-connected computer, say because you’ve asked it to talk to your SharePoint server (note: SharePoint is just an example, and I cannot speak to the myriad of complex configuration mistakes that could exist and cause a SharePoint service to degrade to SSLv3)
  • If that computer has SSLv3 enabled
  • If the encryption scheme negotiation is, for whatever reason, forced to degrade to SSLv3

Then your communication is effectively unencrypted, and a sufficiently motivated attacker can get at your data.

Here’s the nasty catch: if either side has disabled SSLv3, and the encryption negotiation cannot settle on a version of TLS, the entire call will fail, because not making the call is preferable to making a call that everyone can read. This means that if your SharePoint server’s admin has disabled SSLv3, but for whatever reason Salesforce cannot negotiate TLS1.2 with your SharePoint server, the communication will stop and the callout will fail, because no suitable encryption scheme can be negotiated. Updates to SharePoint may start failing, for instance.

In a perfect world, all computers would be upgraded in such a way that prevented SSLv3 from being used. Importantly, if only one side of the communication prohibits SSLv3 and the two computers are able to negotiate a higher level of encryption this isn’t an issue. If you own the server(s) being called out to, you can work to ensure you properly accept TLS1.2.

Or you can wait until Salesforce stops allowing SSLv3 on their end… on 12/20/2014.

Either way, SSLv3 should be disabled!

What is eval()?

Eval is a common method in programming languages that allows the developer to do some Metaprogramming. I’m sure that answer actually raised more questions than it answered, so let’s take a step back and talk about how computers interpret our code.

Whether at compile time or runtime, the programming language itself is responsible for translating human-readable code into something the computer can execute. What differs amongst languages is the grammar that human-readable code takes.

Some languages are “highly dynamic” while others are … well, less dynamic. The hows and whats of defining “dynamic” are a controversy in their own right, and far beyond the pay grade of this blog post, so let me just speak about one of the banner features of dynamic languages: Metaprogramming.

Remember Inception? Like Inception, Metaprogramming is a bit of a mind bender, but the essence of it is this: instead of writing code to solve one problem, developers write code that solves many problems; or, as I like to think of it, developers write code that writes code on the fly.

The idea behind Eval() is to have the compiler or interpreter of the language take a string of text and interpret it as if it were actually code. If you’re not a coder, you may still be waiting for the punch line; what makes this all very important is that, as coders, we can create that string programmatically, mixing in variables for class names, values, etc. This allows for highly dynamic software that, in effect, is capable of writing itself.
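JavaScript’s eval() makes this concrete. In this toy sketch (the variable names are mine), the code being run never exists anywhere except as a string assembled at runtime:

```javascript
// Build a snippet of code as a plain string, mixing in runtime values...
const fieldName = "total";
const amount = 42;
const snippet = "var " + fieldName + " = " + amount + " * 2; " + fieldName + ";";

// ...then ask the interpreter to treat that string as code. eval()
// returns the value of the last statement in the snippet.
const result = eval(snippet); // → 84
```

Swap `fieldName` or `amount` for values pulled from an API response and you have code that, quite literally, wrote itself.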

Why Eval()?

On the Salesforce1 platform, we essentially have two programming languages available to us: Apex and Javascript. Javascript is considered a dynamic language; Apex, not so much. This is demonstrated by the fact that Javascript provides an eval() method whereas Apex does not. Additionally, on the platform Javascript is only available within the browser, so we cannot utilize its eval() method for Apex-based API integrations. So why create an Apex Eval() method? Well, the idea hit me when I was trying to find a way to parse JEXL expression strings in Apex:

variable1 eq '1' or AwsomeVar eq '1' or AwesomeSauce eq '1' or BowTiesAreCool eq '1' or theDoctor eq '1'

JEXL, which you can see in all its glory above, is basically a programming language unto itself. I would receive these JEXL statements from an API, and I needed to evaluate the expressions as true or false. I knew I could pretty easily build a map of JEXL variable names to Apex variable names, and likewise translate operators like eq, turning the expression into something like this:

variable1__c == true || AwsomeVar__c == true || AwesomeSauce__c == true || BowTiesAreCool__c == true || theDoctor__c == true

Wrap that in an if statement and we’re off to the races. Here is where Eval() comes in handy: with Eval() I can pass in that translated string and evaluate it within an if statement. Using Eval() like this means that whenever the integrated API changes a validation JEXL string, my integration automatically reflects that validation change.
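To illustrate the translation step, here’s a toy version in JavaScript (remember, the real integration targets Apex, and a production mapping would be driven by metadata rather than naive string replacement):

```javascript
// Toy translator from a JEXL-ish expression to an eval()-able one.
// The replacements assume, as in the example above, that this particular
// API encodes booleans as '1'; that is a property of the feed, not JEXL.
function translateJexl(jexl) {
  return jexl
    .replace(/\beq\b/g, "==")   // JEXL equality -> familiar equality
    .replace(/\bor\b/g, "||")   // JEXL "or" -> logical or
    .replace(/'1'/g, "true");   // '1' -> boolean literal
}

translateJexl("variable1 eq '1' or theDoctor eq '1'");
// → "variable1 == true || theDoctor == true"
```

A real mapping would also rewrite each variable name to its `__c` field equivalent via a lookup map, which I’ve omitted here for brevity.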

How Eval()?

So how do we create an Eval() method? Salesforce provides us with a REST-based Tooling API that exposes the Execute Anonymous method. Utilizing the Tooling API’s REST access to (securely) call Execute Anonymous allows us to pass in a string of code and have it evaluated as if we were using the developer console’s Execute Anonymous window. Note, this means there are two requirements for Apex Eval() to work: API access (sorry, PE), and setting up a Remote Site in your org that allows you to call out to your own instance of Salesforce, e.g. na4.salesforce.com or cs3.salesforce.com. Once you’ve met those two requirements, we’ll utilize the excellent apex-toolingapi library for calling the Tooling API. Because Apex is a typed language, our Eval methods will need to return a specific type. In my original use case, I wanted to know the Eval’d result of a Boolean expression. To do so, I created the Dynamic class, with the following method:

[gist https://gist.github.com/noeticpenguin/cd457c5b969b48b1f28a]

I’m using an exception so that I can capture and return typed data from the Execute Anonymous call. This allows us to catch only a particular type of exception, in this case IntentionalException, for success use cases, while still retaining the ability for our anonymously executed code to throw a different kind of exception if needed. I’ll leave it as an exercise for the reader to build out other types of Eval methods.
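For the curious, the underlying REST call boils down to hitting the Tooling API’s executeAnonymous resource. Here’s a rough sketch of how that request URL is assembled (the instance name and API version below are placeholders, and the real request also needs an Authorization header carrying a session id, which the apex-toolingapi library handles for you):

```javascript
// Build the GET URL for the Tooling API's executeAnonymous resource.
// "na4" and "v31.0" are placeholder values, not recommendations.
function buildExecAnonymousUrl(instance, apiVersion, apexCode) {
  return "https://" + instance + ".salesforce.com" +
    "/services/data/" + apiVersion + "/tooling/executeAnonymous/" +
    "?anonymousBody=" + encodeURIComponent(apexCode);
}

buildExecAnonymousUrl("na4", "v31.0", "throw new IntentionalException('true');");
```

The response body reports whether the code compiled and ran, and carries any exception message, which is where the IntentionalException trick smuggles the result back out.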

So there you have it: Eval(), a.k.a. Execute Anonymous, within a typed, generally non-dynamic language. Please use this for good, and remember you will incur REST API call costs when using it.

Recently I was introduced to an interesting TED talk, by Jackson Katz. In his talk (which you can find here) he makes quite a few valid, and interesting points. But, for me, the most interesting thing he talks about is what I’m going to call the Bystander Protocol. Katz says that:

A bystander is defined as anybody who is not a perpetrator or a victim in a given situation, so in other words friends, teammates, colleagues, coworkers, family members, those of us who are not directly involved in a dyad of abuse, but we are embedded in social, family, work, school, and other peer culture relationships with people who might be in that situation.

Katz is specifically speaking about abuse in his talk. I think too often we hear or read “abuse” and understand it to mean physical, sexual, verbal or psychological abuse. While those forms of abuse must be addressed, they are blessedly not the most common forms of abuse, and I don’t mean to downplay them in any way. Indeed, I think there’s a more pervasive form of abuse that is particularly prevalent in the technology sector. By ignorance or malice (honestly, I don’t care which), I believe we as a society tend to use language (metaphors, words and idioms) that culls our imaginations and those of our listeners and readers. Sexist language is, I believe, especially prevalent in the technology sector.

I’m sure we can all easily find examples of overt sexism in the technology sector. Earlier this year, this happened:

Sadly, this slide praises only the physical attributes of the metaphor (“looks beautiful”) and denigrates the personality and intellectual attributes. Thankfully, within a few short hours there was a prompt and complete apology. But, as one commentator pointed out, the fact that no one thought to talk the speaker out of this metaphor reveals the underlying problem: no one caught it ahead of time because we’re not self-aware enough of these issues to catch them.

More pervasive than overt sexism, I feel, is our casual use of gendered pronouns and gendered examples in our talks, blog posts and even example code. I imagine it’s hard to hear “Women in Technology, YAY!” from corporations and then read “Your developer can do X if he chooses.” At the very least it’s inconsiderate. Again, I doubt many people, regardless of gender, intentionally choose to be exclusive with their pronouns and language; but I do think the habit is pervasive.

As we approach Dreamforce ’14 I’m reminded of our industry’s history with sexism and struck by the simplicity of Katz’s action point:

What do we do? How do we speak up? How do we challenge our friends? How do we support our friends? But how do we not remain silent in the face of abuse?

(Emphasis mine). I think the answer lies in the Bystander Protocol. As Bystanders, we’re present and able to speak truth to power gently and positively. I believe we, as Dreamforce attendees, can and should expect our speakers (myself included) to avoid not only overt sexism, but exclusive language in general. I don’t imagine this working in an aggressive, confrontational manner. When presented with gender-specific speech, or even language that presumes gender norms, we can (and should) politely, calmly ask the speaker to consider other language.

I believe we should pledge to actively participate in conversations as Bystanders, using neither sexist nor exclusive language, and not permitting such speech to go unchallenged. Let’s actively strive towards a culture of accountability and acceptance by doing something. None of us could hope to change the whole of the tech sector’s misogynistic culture alone, but we can’t stand by in silence either. As Bystanders at the world’s largest cloud computing conference, we have the opportunity and responsibility to do that something by speaking out whenever we find hateful or even careless speech.

In the end, what will hurt the most is not the words of our enemies but the silence of our friends. ~ Martin Luther King Jr.

I want to challenge you, my fellow speakers and attendees at Dreamforce ’14, to pledge to do just that. Tweet with the hashtag #df14Bystander to take the pledge: to speak out when needed, to politely ask questions of leaders and speakers who use exclusive language, to report overtly sexist language, and to avoid such speech yourself. Use “developers”, “devs”, “admins”, “we”, or “they” instead of “him” or “her” in your talks. Let’s make this the tech conference where Women in Technology isn’t about the latest sexist faux pas, but about how women are presumed equal and capable. Wouldn’t that be a news blurb for @Salesforce to press release?