This morning a friend asked for the low-down on Salesforce, SSLv3, POODLE, and what a callout is. She was the fourth person to ask about this, so I decided a quick primer on internet communication might help. The following isn’t meant to be a rigorously technical set of definitions; it glosses over many details to provide a high-level, non-coder overview.

Computers on the internet communicate with each other using a set of protocols. You can think of a protocol as a sort of rigid dialect of a given language. The foundational protocols are written out as “TCP/IP,” which stands, in typically unoriginal geek naming convention, for “Transmission Control Protocol / Internet Protocol.” These protocols do the bulk of the work of sending data across the wires and through the tubes. They handle the mundane communication “conversations” that might look something like this:

Computer1: “Hey, You there, out in California. Sup?”

Computer2: “Hit me with some mad data yo.”

Computer1: “Ok, here’s this ultra-important tweet @codefriar wants to post”

<data>

Computer2: “Got it. Thanks yo. Tell @codefriar 201”

In the beginning was TCP/IP, along with other protocols you’ll recognize. Ever seen HTTP:// ? FTP:// ? These are data protocols that define how a web page’s or a file’s data is transmitted. If you’ll permit me an analogy from Taco-hell, internet communication is not unlike a 7-layer burrito: HTTP layered on top of TCP/IP, and so on. While TCP/IP + HTTP does the vast bulk of the work, as the internet grew up, we consumers decided that sending our credit cards to vendors unencrypted was a “bad idea”(tm). In response, some wicked smart, well-meaning fellows at Netscape (remember them?) developed a thing called Secure Sockets Layer, or SSL. SSL is an optional layer designed to sit between TCP/IP and HTTP. A long time ago (15 years, no kidding) SSL was replaced with TLS, or Transport Layer Security. SSL and its replacement TLS work by establishing a protocol-like negotiation between two computers that looks something like this:

Computer1: Hi, my user asked me to talk to you, but I don’t trust the internet; because internet. So if you don’t mind, tell me who you are, and tell me what encryption schemes you speak. I’m going to start our negotiations with TLS1.2.

Computer2: Uh, due to a network glitch, old hardware, old software, or just because I’m grouchy, I’m going to offer TLS1.0.

Computer1: Ugh, stupid computer, I guess TLS1.0 will work. Now let’s create a one-time encryption key for this session that only you and I will know about.

Computer2: Sure, though I think your attitude towards my “enterprise” (ie: out of date) TLS version is quite rude. Here’s my public key and a one-time key. <key data>

Computer1: “enterprise my ass”, I’ll accept the key.

<data>

Computer1: kthxbai

Any further communication between the two computers is then encrypted with that session specific key. This is a “Good Thing”(tm).

The important part here is that the two computers negotiate which encryption scheme to use. As you can imagine, the computers try to negotiate the highest level of encryption they both support.

Here’s where the POODLEs come in. Some very smart, well-meaning encryption gurus at Google discovered that computers can be fooled into negotiating down to a less secure version of encryption, and that the less secure encryption in question is, in a word, useless. POODLE is the name the Google researchers gave their exploit. In their own words:

…there is no reasonable workaround. This leaves us with no secure SSL 3.0 cipher suites at all: to achieve secure encryption, SSL 3.0 must be avoided entirely.

(Emphasis mine.) POODLE is dangerous precisely because the encryption methods offered by SSLv3 are weak enough that a “bad person”(tm) could listen in on communications and steal information. (Jerks.)

Now, let’s put some legs on these concepts. If you want to buy something online, your computer is going to initiate that encryption-version-detection dance. If you’re buying from a major vendor online, say one based in the lovely land of Washington, you’ll find that their computers will not accept SSL v3.0, because that would be insecure. This is a good and wonderful thing.

On the other hand, let’s say you’re a company that provides a platform for software development. As part of that platform, you allow your developers to make “callouts” to other internet-based services. First, what do I mean by callout? Simply put, a callout is any time the platform initiates communication with a non-platform server; in other words, any time you ask the platform to “call out” to another computer. As you can imagine, these callouts are SSL-enabled, meaning that wherever possible, communication between the platform and the external computer is encrypted. Unfortunately, this also means that if the computer being called out to negotiates the encryption down to SSLv3, well, it’s effectively unencrypted. This is a “Bad Thing”(tm).
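For the developers in the room, here’s roughly what a callout looks like in Apex. This is a minimal sketch; the endpoint is hypothetical:

// A minimal sketch: Apex asking the platform to "call out" to a non-platform server.
// The endpoint is hypothetical.
HttpRequest req = new HttpRequest();
req.setEndpoint('https://api.example.com/orders');
req.setMethod('GET');
HttpResponse res = new Http().send(req); // the encryption negotiation happens here
System.debug('Status: ' + res.getStatusCode());

Behind that single send() call, the platform performs the whole encryption-negotiation dance described above.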

Now, to be even more specific, this means that:

  • If your Salesforce org communicates with any other internet-connected computer, say because you’ve asked it to talk to your Sharepoint server (note: Sharepoint is just an example; I cannot speak to the myriad of complex configuration mistakes that could cause a Sharepoint service to degrade to SSLv3),
  • if that computer has SSLv3 enabled, and
  • if the encryption scheme negotiation is, for whatever reason, forced to degrade to SSLv3,

then your communication is effectively unencrypted. An attacker who is sufficiently motivated can get at your data.

Here’s the nasty catch: if either side has disabled SSLv3 and the negotiation cannot settle on a version of TLS, the entire call will fail, because not making the call is preferable to making a call that everyone can read. So if your Sharepoint server’s admin has disabled SSLv3, but for whatever reason Salesforce cannot negotiate TLS1.2 with your Sharepoint server, the callout will fail because no suitable encryption scheme can be negotiated. Updates to Sharepoint may start failing, for instance.
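In Apex terms, that failure generally surfaces as an exception at the moment of the callout. A minimal sketch of what handling it might look like (the endpoint is hypothetical):

try {
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://sharepoint.example.com/api/update'); // hypothetical endpoint
    req.setMethod('POST');
    req.setBody('{"status": "updated"}');
    HttpResponse res = new Http().send(req);
} catch (System.CalloutException e) {
    // If no mutually acceptable encryption scheme can be negotiated,
    // the callout fails here instead of proceeding unencrypted.
    System.debug(LoggingLevel.ERROR, 'Callout failed: ' + e.getMessage());
}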

In a perfect world, all computers would be upgraded in such a way that prevents SSLv3 from being used. Importantly, if only one side of the communication prohibits SSLv3 and the two computers are able to negotiate a higher level of encryption, this isn’t an issue. If you own the server(s) being called out to, you can work to ensure they properly accept TLS1.2.

Or you can wait until Salesforce stops allowing SSLv3 on their end… On 12/20/2014

Either way, SSLv3 should be disabled!

It’s that time of year again. All the good developers and all the good admins are eagerly awaiting the end of planned maintenance and the new gifts, er, features, that Salesforce is providing. At 340 pages, the release notes are a great balance of detail-without-being-boring, and I highly encourage everyone to read through them. If, however, you don’t happen to have an adorable screaming infant providing you with extra reading time between 2 and 4am, have no fear; I’ve written up a few highlights. I don’t want to let all the cats out of the bag, but suffice it to say there’s The Good, The Bad and The Ugly. Without further ado:

The Good.

  1. Our leader here is innocuously described as “Speed Up Queries with the Query Plan Tool” (see page 241 ff.). In essence, this is the Salesforce equivalent of MySQL’s EXPLAIN, PostgreSQL’s EXPLAIN ANALYZE, or Oracle’s EXPLAIN PLAN functionality. If you’ve never had the pleasure of arguing with a relational database query written by the intern… well, you may not know about explain. In general these tools all work the same way: prepend any given query with the keyword(s) EXPLAIN, and instead of the actual query results the database returns information about how it will gather the information you’re looking for. Here’s why you need this: you and I both put our pants on one leg at a time, but I’ve written queries against objects with more than 30 million records, and I say all our SOQL queries should be reviewed with this explain tool. With it we can see which indexes, if any, the query optimizer is able to utilize. Here’s how SOQL’s explain works:

[code lang=text]
{
"plans" : [ {
"cardinality" : 2843473,
"fields" : [ ],
"leadingOperationType" : "TableScan",
"relativeCost" : 1.7425881237364873,
"sobjectCardinality" : 25849751,
"sobjectType" : "Awesome_Sauce__c"
} ]
}
[/code]

As they say in the hood, “that there query sucks.” See that “leadingOperationType” key in the JSON results? TableScan means the database has to scan every record. Ow. I should really refactor that query so that explain identifies fields it can index off of. With Summer ’14 there’s a spiffy Dev Console button to access this information. Wicked.
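For contrast, here’s the sort of refactor explain rewards. Name is one of the fields indexed by default on custom objects, so a selective filter on it (against the same example object) should report an index-based leading operation instead of a TableScan:

[code lang=text]
SELECT Id, Name FROM Awesome_Sauce__c WHERE Name = 'Secret Sauce'
[/code]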

Other good highlights include:

  1. The ability to override remote object methods
  2. Pricebook entries in tests, without “SeeAllData=true”, aka “DISASTERHERE=true” (see the sketch after this list)
  3. Un-restricted describes. If you build dynamic UIs, this is indispensable!
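On that second point: the release adds a Test.getStandardPricebookId() method, which lets tests build price book entries without touching org data. A minimal sketch of a test that previously required SeeAllData=true:

[code lang=text]
@isTest
private class PricebookEntryTests {
    @isTest static void canCreatePricebookEntries() {
        // No SeeAllData=true required: grab the standard price book Id directly.
        Id standardPricebookId = Test.getStandardPricebookId();
        Product2 widget = new Product2(Name = 'Widget', IsActive = true);
        insert widget;
        PricebookEntry entry = new PricebookEntry(
            Pricebook2Id = standardPricebookId,
            Product2Id = widget.Id,
            UnitPrice = 42.00,
            IsActive = true
        );
        insert entry;
        System.assertNotEquals(null, entry.Id, 'The entry should insert cleanly');
    }
}
[/code]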

The Bad.

  1. There’s an aside on page 191 that bodes ill for many of us. If you’ve ever put JavaScript in a home page component, start heeding the warning now: after Summer ’15, no more JS in home page components. Convert to the new Visualforce component, or suffer the wrath of progress.

The Ugly.

Ok, I can’t really blame Salesforce for this, but the simple fact of the matter is that not all Salesforce devs are created equal. As a Salesforce consultant and developer I have inherited a number of orgs plagued with test classes that execute code, but make no assertions.
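To make that concrete, here’s a sketch of the difference. DiscountService is hypothetical; the point is the last two lines, which the coverage-only tests I inherit never have:

[code lang=text]
@isTest
private class DiscountServiceTests {
    @isTest static void testApplyDiscount() {
        Opportunity opp = new Opportunity(Name = 'Test Deal', StageName = 'Prospecting',
                                          CloseDate = Date.today(), Amount = 100);
        insert opp;
        DiscountService.applyDiscount(opp.Id); // executing this earns code coverage...
        // ...but only an assertion proves the code actually worked:
        opp = [SELECT Amount FROM Opportunity WHERE Id = :opp.Id];
        System.assertEquals(90, opp.Amount, 'A 10% discount should have been applied');
    }
}
[/code]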

As a developer, I understand the importance of testing code, and I believe we should always write useful tests. Additionally, I know Salesforce runs the unit tests in our orgs before every release. Without assertions, however, those test runs tell us only that the code runs, not that it functions properly. While there are rarely, if ever, technological solutions to social problems (like the lack of rigor and professionalism around testing among Salesforce developers), I believe it is in the best interest of not only Salesforce developers but also Salesforce itself to build a feature allowing administrators to enable an org-wide flag requiring all test methods to call assert methods, with sane protections against such clear abuses as System.assert(true);

This can only result in better testing, and therefore better code in production, as well as better feedback to Salesforce about the viability of new API versions.

You should vote for this idea here:

https://success.salesforce.com/ideaView?id=08730000000l6zHAAQ

A little background.

Recently I was working on a Salesforce app that interacts with a third-party API. In our case, users work in Salesforce to sell complex digital products served by a remote fulfillment platform. Unfortunately, the remote API wasn’t designed with Salesforce in mind, so simple-sounding business processes required multiple API calls. The sheer number of calls needed made direct callouts impractical. To overcome this we built a middleware application hosted on Heroku. We intentionally architected our middleware so that a single Salesforce callout could trigger the process. In response to the callout, our middleware application uses the REST API to call back into Salesforce and gather all the needed data. Then it makes API calls as needed to push that data to the client’s proprietary fulfillment platform. To ensure the Salesforce user isn’t stuck waiting for a page to load, the middleware app works asynchronously. Unfortunately, this also complicates success and failure messaging to the Salesforce user. This is where the Streaming API comes into play. Using the Streaming API we can show realtime success and error notifications from our middleware to the Salesforce user.

Enter the Streaming API.

If you’re not familiar with it: Salesforce introduced the Streaming API a few releases ago, and it’s one of the most powerful additions to the platform. Here’s how it works. As a developer, you establish a “push topic,” which takes the form of a PushTopic object record. PushTopic records have a few key fields, namely:

  • Query, which holds a string representation of a SOQL query
  • NotifyForOperationCreate: if true, insert DML calls will trigger a push event
  • NotifyForOperationUpdate: if true, update DML calls will trigger a push event
  • NotifyForOperationDelete: if true, delete DML calls will trigger a push event
  • NotifyForOperationUndelete: if true, undelete DML calls will trigger a push event

The Notify fields are all booleans. If one is set to true, any corresponding DML statement whose data matches your query will result in the API pushing that record to subscribers. For instance, if you’ve saved your push topic record with:

NotifyForOperationCreate=true
Query='SELECT Id, Name FROM Account'

then every newly created Account record will be pushed out to any subscribed client.

Putting it all together – The middleware changes

With our API integration example, we need to make a change to our middleware to enable notifications. Likewise, inside our Salesforce app, we’ll need to do two things:

  • Establish a push topic.
  • Edit our Visualforce page to subscribe to the push topic and display the notifications.

Let’s start with the middleware modifications. Our middleware application returns final results to Salesforce by creating Audit_Log__c records. As originally designed, it creates an audit log only at the end of the process. If we want to see immediate results, however, we’ll need to extend our middleware to create multiple Audit_Log__c records, one per step in the process. The key to this integration, then, is to ensure those records trigger our push topic. Each record logs the action taken, whether it succeeded, and what, if any, error messages were returned.
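For illustration, each per-step log is just a record create through Salesforce’s REST API. The field values here are illustrative, and the object name matches the PushTopic we’ll define shortly; your schema will differ:

POST /services/data/v30.0/sobjects/API_Audit_Log__c/
Authorization: Bearer <session token>
Content-Type: application/json

{
  "Name" : "Step 2: push product data",
  "Action__c" : "PushProductToFulfillment"
}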

Visualforce changes

With our middleware set up to log individual events, we can turn our attention back to Salesforce. First we need to establish a PushTopic record. The easiest way to create one is through the Developer Console: open it, click the Debug menu, and choose “Open Execute Anonymous Window.” The Execute Anonymous window lets us run small bits of code without having to generate a full class. Copy and paste this code sample into it:

PushTopic pushTopic = new PushTopic();
pushTopic.Name = 'ExternalAPINotifications'; // the channel name we'll subscribe to
pushTopic.Query = 'SELECT Id, Name, Action__c FROM API_Audit_Log__c';
pushTopic.ApiVersion = 30.0;
pushTopic.NotifyForOperationCreate = true;    // push newly created audit logs...
pushTopic.NotifyForOperationUpdate = false;   // ...but ignore updates,
pushTopic.NotifyForOperationUndelete = false; // undeletes,
pushTopic.NotifyForOperationDelete = false;   // and deletes
pushTopic.NotifyForFields = 'Referenced';     // notify when fields referenced in the query change
insert pushTopic;

Click Execute, and the window should disappear. If you see a success message in the log window, move on!

Within our Visualforce page, we have a bit more work to do. Essentially, we need to incorporate a few JavaScript libraries and display the results. To do this, we’ll need to:

  • create a static resource bundle
  • load a few JavaScript files on our Visualforce page
  • add some markup to display the results
  • write a JavaScript callback
  • add a filter

While Salesforce handles the work of streaming the data, to display it we’ll need to subscribe to our PushTopic. To subscribe we use the CometD JavaScript library. CometD is a JavaScript implementation of the Bayeux protocol, which the Streaming API uses. Using this library, along with jQuery and a helper library for JSON, we can subscribe with a single line of code:

$.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...});

But let’s not get ahead of ourselves. First, let’s create a static resource. Static resources are created by uploading zip files to Salesforce. For more information on creating static resources, see this helpful document. I’ve created a zipfile containing all the libraries you’ll need to use the Streaming API here: https://www.dropbox.com/s/4r6hwtr3xvpyp6z/StreamingApi.resource.zip Once you’ve uploaded that static resource, open up your Visualforce page and add these lines at the top:

<!-- Streaming API Libraries -->
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery-1.5.1.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/cometd.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/json2.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery.cometd.js')}"/>

These lines tell Visualforce to include the JavaScript you need on your page.

The Final Countdown!

In order for the Streaming API to add HTML segments to our page whenever a PushTopic event fires, we’ll need to put a div on our page. Where is largely up to you, but I tend to keep my messaging at the top of the page, similar to how Salesforce does its own validation messaging. Wherever you decide to put it, add a div tag and give it the id “apiMessages”. Something like this will do nicely:

<div id="apiMessages"></div> <!-- This Div is for use with the streaming Api. Removing this div hurts kittens. -->

Then at the bottom of your page’s markup, find the closing </apex:page> tag. Just above it, place a new script block like this:

<script type="text/javascript">
</script>

Inside this script block, we’re going to subscribe to our PushTopic and set up how our data looks when presented. To start, let’s create a jQuery on-document-ready handler like this:

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      // Everything is Awesome Here. Here we can do stuff. Stuff that makes our bosses go "whoa!"
    });
  })(jQuery);
</script>

All this can look a bit intimidating, but code inside this block will run when the browser signals that the document is ready. It’s in here that we want to initialize our CometD connection to the Streaming API and do something with our data. The CometD library we’re using is implemented as a callback system, so we need to write a callback function that outputs our data to the screen. But first, let’s hook up CometD to the Streaming API.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({ // <-- That line invokes the cometd library.
        // This next line snags the current logged-in user's server instance, e.g. https://na5.salesforce.com, and attaches the cometd endpoint to it.
        // The endpoint version matches our PushTopic's ApiVersion.
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/30.0/',
        // Always vigilant with security, Salesforce makes us authenticate our cometd usage. Here we set the OAuth token! Don't forget this step!
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
    });
  })(jQuery);
</script>

A couple of important notes here. The url and requestHeaders lines are identical regardless of org, so this snippet works anywhere. Astute observers will note that we’re letting Visualforce substitute in the actual API session credentials. This means the Streaming API follows Salesforce security: if you can’t see the streamed object normally, you won’t be able to see it here.

Once we’ve set up the connection, we can establish the subscription. As before, it’s a simple one-line addition to our code.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/30.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      // **** this is the crucial bit that changes per use case! ****
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...});
    });
  })(jQuery);
</script>

The subscribe method accepts two parameters. The first is the text representation of the stream to subscribe to; it’s always going to start with ‘/topic/’. The second is a callback function to be executed whenever data is received. In case you’re new to JavaScript or asynchronous development, a callback is a function executed whenever a given event occurs, or when another method completes and calls it.

In our example above, we’re creating an anonymous function that accepts a single argument, message. message is a JavaScript object made available to the body of our function. Within this function you can do anything that JavaScript allows, from alert() calls to appending elements to the DOM tree. Appending elements to the DOM is the most practical option, so let’s build that out. Remember the div we created a few steps back? The one with the id “apiMessages”? Let’s put it to work.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/30.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) { //<-- that function(message) bit -- it starts our callback
                $('#apiMessages').append('<p>Notification: ' +
                    'Record name: ' + JSON.stringify(message.data.sobject.Name) +
                    '<br>' + 'ID: ' + JSON.stringify(message.data.sobject.Id) +
                    '<br>' + 'Event type: ' + JSON.stringify(message.data.event.type)+
                    '<br>' + 'Created: ' + JSON.stringify(message.data.event.createdDate) +
                    '</p>');
                }); // <-- the } ends the callback, and the ); finishes the .subscribe method call.
    });
  })(jQuery);
</script>

Let’s unpack that a bit. To start with, we’re invoking jQuery via $ to find the element with the id “apiMessages”, and asking jQuery to append a string to that div for every record it receives. Thus, as records come in via the Streaming API, a paragraph tag is added to the apiMessages div containing the text block “Record Name: name of record” <br> “Id: id of record” <br> … and so forth. It’s this append call that displays the notifications streamed to the page.

Gotchas

At this point we have a functional Streaming API implementation that will display every streamed record matching our PushTopic. That can add a lot of noise to the page, since we probably only care about records related to the object we’re viewing. There are two ways to accomplish that kind of filtering. The first is to adjust our subscription: when we subscribe to the topic, we can append a filter to the topic name, like this:

$.cometd.subscribe("/topic/ExternalAPINotifications?Company=='Acme'", function(message) {...});

In this situation, only records matching the push topic criteria AND whose company name is Acme would be streamed to our page. You can filter on any field of the record this way. For more complex filtering, you can inspect the message data itself: because you’re writing the callback function, you can simply do nothing when the record you received isn’t one you wish to display.
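That second approach looks something like this; currentRecordName is illustrative (you might render it into the page from your controller):

$.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {
  // Ignore anything unrelated to the record this page is displaying.
  if (message.data.sobject.Name !== currentRecordName) {
    return; // do nothing; the record streamed in, but we won't display it
  }
  $('#apiMessages').append('<p>' + message.data.sobject.Name + ' was updated.</p>');
});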

Next steps, new ideas and other things you can do!

One thing we noticed after developing this is that we were left with a very large number of audit log records. In the future we may set up a “sweeper” to collect and condense the individual event audit logs into a single audit log of a different record type once everything has gone smoothly. We’ve also talked about creating a Dashing dashboard with live metrics from the fulfillment server. What ideas do you have? Leave a comment!