What’s wrong with death sir? What are we so mortally afraid of? Why can’t we treat death with a certain amount of humanity and dignity, and decency, and God forbid, maybe even humor. Death is not the enemy gentlemen. If we’re going to fight a disease, let’s fight one of the most terrible diseases of all, indifference.

~ Robin Williams, as Patch Adams, in Patch Adams.

Last night I learned Robin Williams died. As of right now, everything indicates he took his own life. A colleague tweeted that, whenever he learns of the death of a celebrity he admired, he stops and asks, “So, where do we go from here?” I won’t presume to speak for what Robin Williams would or would not have wanted his death to mean, but I think this is an excellent time to pause and consider what he chose to teach us about the world.

At the beginning of Patch Adams, Williams’ portrayal of a depressed man turned physician begins with a few words, not from the historical Patch Adams, but from Dante’s epic tale of descent into hell:

In the middle of the journey of my life, I found myself in a dark wood, for I had lost the right path.

~Dante Alighieri, Inferno (Canto 1, 1-3)

The movie goes on to show us how Patch found the right path, though arguably not before trekking through the underworld. Importantly, and perhaps most poignantly, Williams’ portrayal of Patch teaches us two key lessons.

  1. Though the right path is lost, it can be regained. This has always been hopeful news for me. As a friend of mine once told me, you have to have hope to get up in the morning. Hope, however fleeting, must not be forgotten. The right path can, and will, be found. Some may find this ironic given the circumstances of Williams’ death. Williams may have taken his own life, but until that fatal decision was enacted there was always hope.
  2. Hope comes in many forms and in the weirdest of places. Humor, as Patch taught us, can be found even in the most hopeless of situations. Asking the catatonic man whose arm is forever pointed up where Heaven is makes light of a condition many would find hopeless, and in so doing it lightened the mood, lifted the spirits, and brought hope to the others in his group therapy session. Hope that their condition wasn’t nearly as easy to make fun of.

We don’t read and write poetry because it’s cute. We read and write poetry because we are members of the human race. And the human race is filled with passion. And medicine, law, business, engineering, these are noble pursuits and necessary to sustain life. But poetry, beauty, romance, love, these are what we stay alive for… That you are here – that life exists, and identity; that the powerful play goes on and you may contribute a verse. That the powerful play goes on and you may contribute a verse. What will your verse be?

~ Robin Williams, as John Keating, Dead Poets Society.

This morning, I heard a demagogue run their mouth about Williams’ apparent suicide, characterizing it as a deeply selfish act to be condemned. I heard another person say he lost the fight to Depression. I find it hard to be charitable to either of these statements. Depression isn’t a battle to be won or lost, but a disease to be treated. A really shitty disease we’re all susceptible to. One we’ve all faced to some degree or another. Additionally, to call this a deeply selfish act is, in my opinion, to wash one’s hands of the responsibilities we have to our friends and family who live with this disease. Williams is often quoted as saying:

I used to think the worst thing in life was to end up all alone, it’s not. The worst thing in life is to end up with people that make you feel alone.

~Robin Williams

I am not saying that those around him made him feel alone. Far be it from me to presume such a thing. I am, however, saying that when we find friends and family struggling with Depression, we –unconsciously– treat them in ways that often feel isolating and judgmental. Ever told someone to “just cheer up?” Ever been told to “just cheer up?” Intentions don’t match up with what’s heard. We mean well, but we end up marginalizing or delegitimizing their struggles, or worse, leaving them feeling like they’re not understood. Alone.

I’m writing this down not just out of regret and loss for a man who influenced my life in a myriad of subtle ways, but also because Depression is one of those diseases whose casualties include more than the friends and family left behind by suicide; it claims our hearts and souls too. No one wants to get the call that someone we love has committed suicide. No one wants to relentlessly interrogate every phrase and action of every interaction they had with that loved one.

If you’re reading this, there’s a strong chance you work in the high-tech industry. There’s a good chance you’ve known coworkers or friends with depression. There are simple things we can do to help. To show hope, to refuse their urge to isolate, and our urge to allow it. To walk with them through hell and back. I’m not a therapist, and I don’t want anyone to confuse this advice with “professional advice,” but here’s what I think we can do for each other to help:

  1. Stop. We lead busy lives, often artificially busy lives. One of the most powerful things we can do for anyone is just stop, and spend time with them. Coffee. Dinner. A walk after lunch. Time well spent. As friends we have many responsibilities, but chief amongst them is always to provide truth and perspective to our friends.
  2. Listen. Listen to understand, but more importantly, to show understanding. This isn’t listening while driving, or listening while writing an email. I mean actively listening. Ask questions. Does some struggle not make sense? Ask a clarifying question.
  3. Validate. This isn’t to say you should tell them they’re 100% right to feel a given way about a given situation. What I mean here is: remind them that their struggles aren’t unique to them. Are they having relationship problems? “You know X, that was really shitty of Y.”
  4. Question. Help question assumptions. Herein lies the hope. So much of our lives is spent communicating; how much of that communication seeks to fix miscommunication? Often the assumptions we make about the world around us are founded on miscommunication. Having friends who question those assumptions helps us find hope in what otherwise might seem a hopeless situation.
  5. Encourage them to seek professional help. Don’t stigmatize it, and don’t let others stigmatize it either. Never forget that if you feel your friend is in danger, the better part of valor, the better part of humanity, is to risk a friendship by reporting them to professionals rather than to risk a friend.
  6. Write this number down on a card, and put it in your wallet for emergencies: National Suicide Prevention Hotline: 1-800-273-8255

All of life is a coming home. Salesmen, secretaries, coal miners, beekeepers, sword swallowers, all of us. All the restless hearts of the world, all trying to find a way home. It’s hard to describe what I felt like then. Picture yourself walking for days in the driving snow; you don’t even know you’re walking in circles. The heaviness of your legs in the drifts, your shouts disappearing into the wind. How small you can feel, and how far away home can be. Home. The dictionary defines it as both a place of origin and a goal or destination. And the storm? The storm was all in my mind. Or as the poet Dante put it: In the middle of the journey of my life, I found myself in a dark wood, for I had lost the right path. Eventually I would find the right path, but in the most unlikely place.

~ Robin Williams, as Patch Adams in Patch Adams.

Look. Here’s the deal. If your company won’t send you to Dreamforce, it’s time to give serious thought to finding one that will. Dreamforce happens just once a year, and it’s four days packed full of information. More than sessions, mini-hacks and several hundred pounds of new books, Dreamforce is your chance to cross-pollinate ideas with other developers and admins. The single greatest reason you need to attend Dreamforce isn’t to see Reid lose his voice in the IoT lab, but rather to see new and innovative ideas and solutions to problems. Problems you may be struggling with, problems you don’t yet even have — but will. Simply put, Dreamforce is the only event in the world where 100k people get together to cross-pollinate ideas. You and I won’t be the smartest, most experienced people at Dreamforce this year, but we’re not the least experienced people there either. We go to learn, and to teach, equally. So if your company won’t send you to Dreamforce, find one that will, and make sure that if Dreamforce 14 isn’t in the cards, Dreamforce 15 is.

That’s fine, Kevin, but how do I do that?
The very best part of Salesforce is the rich community that’s grown up around it. Better still, these are the people who know who’s hiring, and who know whether or not Dreamforce is a regular thing at a given company. Find your user group, ask on the Dev community, get the UG leaders to keep a “we’re hiring” sheet, or post a weekly “who’s hiring” thread to the success community for your UG. The community knows the power of Dreamforce, and they can help you find a company that will value you enough to send you. Because the truth is, if your company won’t send you to Dreamforce, they undervalue you and the work you do. Ask a UG leader, look on the success community, find a better opportunity. Never have we Salesforce Admins and Developers been more in demand than now. We are the kingmakers of business, helping realize processes, facilitate communication, and increase ROI. Dreamforce only hones those skills as we jostle, literally, from one place to another, learning from and teaching each other.

It’s that time of year again. All the good developers and all the good admins are eagerly awaiting the end of planned maintenance and the new gifts, er, features, that Salesforce is providing. At 340 pages, the release notes strike a great balance of detail without being boring, and I highly encourage everyone to read through them. If, however, you don’t happen to have an adorable screaming infant providing you with extra reading time between 2 and 4am, have no fear; I’ve written up a few highlights. I don’t want to let all the cats out of the bag, but suffice it to say, there’s The Good, The Bad and The Ugly. Without further ado:

The Good.

  1. Our leader here is innocuously described as “Speed Up Queries with the Query Plan Tool” (see page 241 ff.). In essence, this is the Salesforce equivalent of MySQL’s EXPLAIN, PostgreSQL’s EXPLAIN ANALYZE or Oracle’s EXPLAIN PLAN functionality. If you’ve never had the pleasure of arguing with a relational database query written by the intern… well, you may not know about explain. In general these tools all work the same way – prepend any given query with the keyword(s) EXPLAIN and the database will return information about how it will gather the information you’re looking for, instead of the actual query results. Here’s why you need this: you and I both put our pants on one leg at a time, but I’ve written queries against objects with more than 30 million records, and I say all our SOQL queries should be reviewed with this explain tool. With it we can see which indexes, if any, the query optimizer is able to utilize. Here’s how SOQL’s explain works:

[code lang=text]
{
"plans" : [ {
"cardinality" : 2843473,
"fields" : [ ],
"leadingOperationType" : "TableScan",
"relativeCost" : 1.7425881237364873,
"sobjectCardinality" : 25849751,
"sobjectType" : "Awesome_Sauce__c"
} ]
}
[/code]

As they say in the hood, “that there query sucks.” See that “leadingOperationType” key in the JSON results? TableScan means it has to scan every record. Ow. I should really refactor that query so that explain identifies fields it can index off of. With Summer ’14 there’s a spiffy Dev Console button to access this information. Wicked.
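Outside the Dev Console, the same plans are exposed through the REST API: pass your query in an explain parameter on the Query resource instead of q. Here’s a rough sketch of building that request in Python — the instance hostname and session token are placeholders, not real credentials:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical instance and session token -- substitute your own.
INSTANCE = "https://na1.salesforce.com"
SESSION_ID = "00D...SESSION_TOKEN"

def explain_request(soql):
    """Build a REST request that returns query plans instead of rows."""
    params = urlencode({"explain": soql})
    return Request(
        f"{INSTANCE}/services/data/v30.0/query/?{params}",
        headers={"Authorization": f"Bearer {SESSION_ID}"},
    )

req = explain_request("SELECT Id FROM Awesome_Sauce__c")
# urllib.request.urlopen(req) would return a JSON plan like the one above.
```

Same plans, same JSON shape as the Dev Console button, just scriptable.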

Other good highlights include:

  1. The ability to override remote object methods
  2. Pricebook entries in tests, without “SeeAllData=true”, aka “DISASTERHERE=true”
  3. Un-restricted describes. If you build dynamic UIs this is indispensable!

The Bad.

  1. There’s an aside on page 191 that bodes ill for many of us. If you’ve ever put JavaScript in a home page component, start heeding their warning now. After Summer ’15, no more JS in home page components. Convert to the new Visualforce component, or suffer the wrath of progress.

The Ugly.

Ok, I can’t really blame Salesforce for this, but the simple fact of the matter is that not all Salesforce devs are created equal. As a Salesforce consultant and developer I have inherited a number of orgs plagued with test classes that execute code, but make no assertions.

As a developer, I understand the importance of testing code, and believe that we should always write useful tests. Additionally, I know Salesforce runs the unit tests in our orgs before every release. Without assertions, however, these test runs tell us only that the code runs, not that it’s functioning properly. While there are rarely, if ever, technological solutions to social problems — like the lack of rigor and professionalism with regard to testing amongst Salesforce developers — I believe it is in the best interest of not only Salesforce developers but also Salesforce itself to build a feature allowing administrators to engage an org-wide flag requiring all test methods to call assert methods, with sane protections against such clear abuses as System.assert(true);
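To make the contrast concrete, here’s a hypothetical pair of test methods — the Discounter class and its 10% rule are invented for illustration. The first exercises code without verifying anything; the second actually asserts the expected behavior:

```apex
@isTest
private class DiscounterTests {
    // Executes code, asserts nothing -- this "passes" even if the math is wrong.
    @isTest static void testRunsButProvesNothing() {
        Discounter.applyDiscount(100);
    }

    // Asserts the behavior -- this is what an org-wide flag should require.
    @isTest static void testDiscountIsTenPercent() {
        Decimal result = Discounter.applyDiscount(100);
        System.assertEquals(90, result, 'A 10% discount on 100 should yield 90');
    }
}
```

Both count toward coverage today; only the second tells you anything when it runs.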

This can only result in better testing, and therefore better code in production, as well as better feedback to Salesforce about the viability of new API versions.

You should vote for this idea here:

https://success.salesforce.com/ideaView?id=08730000000l6zHAAQ

The problem at hand

Visualflow is one of the most powerful tools available to Salesforce Admins and Developers. Often the biggest barrier to adoption isn’t a technical issue of capabilities but the lack of realization that visual flows can do that! Unfortunately, one of the technical issues that seems to come up often (at least recently) is how to create a record in a flow, and then, upon successful completion of the flow, redirect the user to the new record. The use cases are pretty broad, but I was roped into the following one: a flow is written to guide users through creating a case. When the case is created and the flow is finished, we want to redirect the users to the newly created case’s detail page. Sounds simple, right?

Good Guy VisualFlow.

Unfortunately, the finishLocation attribute of the Visualforce flow tag doesn’t accept flow variables. It’s therefore impossible, at this time, to create a flow with a programmatically defined finishLocation. What you can do, however, is write a Visualforce controller that uses a getter method to programmatically generate the finishLocation attribute. Rather than creating these controllers one-off as you need them, I’ve created a reusable Visualforce controller that you can use with any flow you write, to redirect to any given RecordID.

Show Me The Code.

Note well: you need to create a flow named “RedirectFlow” that consists of a decision step that launches the flow you actually want to kick off. Line 4 of the Visualforce page is a parameter for defining which flow you actually want to start. This “wrapper flow” bit is needed to make the controller re-usable. Big thanks to SalesforceWizard for pointing out the mistake I made. He’s the man.
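The shape of the pattern is a controller whose getter reads a record id out of the flow interview and turns it into the finishLocation. A sketch — the variable name vRecordId and the fallback URL are assumptions; your wrapped flow must populate an equivalent output variable:

```apex
public class FlowRedirectController {
    // Bound to the interview="" attribute of the <flow:interview> tag.
    public Flow.Interview.RedirectFlow flowInstance { get; set; }

    // Bound to finishLocation="{!finishLocation}" on the same tag.
    public PageReference getFinishLocation() {
        if (flowInstance != null && flowInstance.vRecordId != null) {
            // vRecordId is an output variable the wrapped flow populates.
            return new PageReference('/' + flowInstance.vRecordId);
        }
        return new PageReference('/home/home.jsp'); // fallback if no record exists
    }
}
```

On the page, you bind flowInstance to the flow:interview tag’s interview attribute and set finishLocation="{!finishLocation}"; when the interview finishes, the getter runs and the user lands on the new record.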

A little background.

Recently I was working on a Salesforce app that interacts with a third-party API. In our case, users utilize Salesforce to sell complex digital products served by a remote fulfillment platform. Unfortunately, the remote API wasn’t designed with Salesforce in mind. As a result, simple-sounding business processes required multiple API calls. The sheer number of calls needed made direct callouts impractical. To overcome this we built a middleware application hosted on Heroku. We intentionally architected our middleware so a single Salesforce callout could trigger the process. In response to the callout, our middleware application uses the REST API to call back into Salesforce and gather all the needed data. Then it makes API calls as needed to push that data to the client’s proprietary fulfillment platform. To ensure the Salesforce user isn’t left waiting for a page to load, the middleware app works asynchronously. Unfortunately, this also complicates success and failure messaging to the Salesforce user. This is where the Streaming API comes into play. Using the Streaming API we can show realtime success and error notifications from our middleware to the Salesforce user.

Enter the Streaming API.

If you’re not familiar with it, Salesforce introduced the Streaming API a few releases ago, and it is one of the most powerful additions to the Salesforce platform. Here’s how it works: as a developer, you establish a “push topic”. PushTopics take the form of a PushTopic object record. PushTopic records have a few key fields; namely:

  • Query, which holds a string representation of a SOQL query
  • notifyForOperationCreate, if true insert dml calls will trigger a push event
  • notifyForOperationUpdate, if true update dml calls will trigger a push event
  • notifyForOperationDelete, if true delete dml calls will trigger a push event
  • notifyForOperationUndelete, if true undelete dml calls will trigger a push event

These fields are all boolean fields. If set to true, any corresponding DML statement whose data matches your query will result in the API pushing that record. For instance, if you’ve saved your push topic record with:

notifyForOperationCreate=true
query='SELECT Id, Name FROM Account'

then every newly inserted Account matching that query will be pushed to subscribers of the topic.

Putting it all together – The middleware changes

With our API integration example, we need to make a change to our middleware to enable notifications. Likewise, inside our Salesforce app, we’ll need to do two things:

  • Establish a push topic.
  • Edit our Visualforce page to subscribe to the push topic and display the notifications.

Let’s start with the middleware modifications. Our middleware application returns final results to Salesforce by creating Audit_Log__c records. As originally designed, it’s set up to create an audit log only at the end of the process. If we want to see immediate results, however, we’ll need to extend our middleware to create multiple Audit_Log__c records — one per step in the process. The key to this integration, then, is to ensure our Audit_Log__c records trigger our push topic. In our case the solution is to create a new audit log record for each step of the process. Each of these records logs the action taken, whether it succeeded, and what, if any, error messages were returned.
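From the middleware side, that amounts to one REST POST to the Audit_Log__c sobject endpoint per step. A sketch in Python — the custom field names (Success__c, Error_Message__c), instance URL, and token are assumptions for illustration:

```python
import json
from urllib.request import Request

INSTANCE = "https://na1.salesforce.com"  # hypothetical org instance
SESSION_ID = "00D...TOKEN"               # OAuth token the middleware holds

def audit_log_request(action, success, error_message=None):
    """Build the POST that logs one step of the fulfillment process."""
    body = {
        "Action__c": action,
        "Success__c": success,
        "Error_Message__c": error_message or "",
    }
    return Request(
        f"{INSTANCE}/services/data/v30.0/sobjects/Audit_Log__c/",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {SESSION_ID}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# One request per step, e.g.:
req = audit_log_request("Provision SKU", True)
```

Each insert that lands this way is a candidate for the push topic to stream back to the user’s page.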

Visualforce changes

With our middleware set up to log individual events, we can turn our attention back to Salesforce. First we need to establish a PushTopic record. The easiest way to create a PushTopic is to use the Developer Console. Open up the Dev Console, click on the Debug menu, and choose “Open Execute Anonymous Window.” This anonymous Apex window allows us to execute small bits of code without having to generate a full class. Copy and paste this code sample into your anonymous Apex window:

PushTopic pushTopic = new PushTopic();
pushTopic.Name = 'ExternalAPINotifications';
pushTopic.Query = 'SELECT Id, Name, Action__c FROM Audit_Log__c';
pushTopic.ApiVersion = 30.0;
pushTopic.NotifyForOperationCreate = true;
pushTopic.NotifyForOperationUpdate = false;
pushTopic.NotifyForOperationUndelete = false;
pushTopic.NotifyForOperationDelete = false;
pushTopic.NotifyForFields = 'Referenced';
insert pushTopic;

Click execute, and your anonymous Apex window should disappear. If you see a success message in the log window, move on!

Within our Visualforce page, we have a bit more work to do. Essentially, we need to incorporate a few Javascript libraries and display the results. To do this, we’ll need to:

  • create a Static resource bundle
  • load a few javascript files on our visualforce page
  • add some markup to display
  • write a javascript callback
  • add a filter

Salesforce handles the work of streaming the data; to display it, however, we’ll need to subscribe to our PushTopic. To subscribe we use the CometD JavaScript library. CometD is a JavaScript implementation of the Bayeux protocol, which the Streaming API uses. Using this library, along with jQuery and a helper library for JSON, we can subscribe with a single line of code.

$.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...});

But let’s not get ahead of ourselves. First, let’s create a static resource. Static resources are created by uploading zip files to Salesforce. For more information on creating static resources see this helpful document. I’ve created a helpful zipfile containing all the libraries you’ll need to use the Streaming API here: https://www.dropbox.com/s/4r6hwtr3xvpyp6z/StreamingApi.resource.zip Once you’ve uploaded that static resource, open up your Visualforce page and add these lines at the top:

<!-- Streaming API Libraries -->
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery-1.5.1.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/cometd.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/json2.js')}"/>
<apex:includeScript value="{!URLFOR($Resource.StreamingApi, '/cometd/jquery.cometd.js')}"/>

These lines tell Visualforce to include the JavaScript you need on your page.

The Final Countdown!

In order for the Streaming API to add HTML segments to our page whenever it fires on our PushTopic, we’ll need to put a div on our page. Where is largely up to you, but I tend to keep my messaging at the top of the page. This is similar to how Salesforce does its own validation messaging. Wherever you decide to put it, add a div tag and give it the id “apiMessages”. Something like this will do nicely:

<div id="apiMessages"></div> <!-- This Div is for use with the streaming Api. Removing this div hurts kittens. -->

Then at the bottom of your page’s markup, find the ending </apex:page> tag. Just above that tag, place a new script tag block like this:

<script type="text/javascript">
</script>

Inside this script block, we’re going to subscribe to our PushTopic and set up how our data looks when presented. To start, let’s create a jQuery on-document-ready handler like this:

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      // Everything is Awesome Here. Here we can do stuff. Stuff that makes our bosses go "whoa!"
    });
  })(jQuery);
</script>

All this can look a bit intimidating, but code inside this block will run when the browser signals that the document is ready. It’s in here that we want to initialize our CometD connection to the Streaming API and do something with our data. The CometD library we’re using is implemented as a callback system, so we need to write a callback function that outputs our data to the screen. But first, let’s hook up CometD to the Streaming API.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({ // <-- That line invokes the cometd library.
        // This next line snags the current logged in users' server instance: ie https://na5.salesforce.com and attaches the comet endpoint to it.
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        // Always vigilant with security, Salesforce makes us Authenticate our cometd usage. Here we set the oAuth token! Don't forget this step!
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
    });
  })(jQuery);
</script>

A couple of important notes here. The url and request headers are identical regardless of org. Astute observers will note that we’re letting Visualforce substitute in actual API session credentials. This means that the Streaming API follows Salesforce security. If you can’t see the streamed object normally, you won’t be able to see it here.

Once we’ve set up the connection, we can establish the subscription. As before, it’s a simple one-line addition to our code.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      // **** this is the crucial bit that changes per use case! ****
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) {...});
    });
  })(jQuery);
</script>

The subscribe method accepts two parameters. The first is the text representation of the stream to subscribe to; it’s always going to start with ‘/topic/’. The second is a callback function to be executed whenever data is received. In case you’re new to JavaScript or the asynchronous development community, a callback is a function executed whenever a given event occurs, or when another method completes and calls it.

In our example above, we’re creating an anonymous function that accepts a single argument – message. message is a JavaScript object made available to the body of our function. Within this function you can do anything that JavaScript allows, from alert() calls to appending objects to the DOM tree. Functionally, appending elements to the DOM is the most practical, so let’s build that out. Remember the div we created a few steps back? The one with the id “apiMessages”? Let’s put that to work.

<script type="text/javascript">
  (function($){
    $(document).ready(function() {
      $.cometd.init({
        url: window.location.protocol+'//'+window.location.hostname+'/cometd/24.0/',
        requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}'}
      });
      $.cometd.subscribe('/topic/ExternalAPINotifications', function(message) { //<-- that function(message) bit -- it starts our callback
                $('#apiMessages').append('<p>Notification: ' +
                    'Record name: ' + JSON.stringify(message.data.sobject.Name) +
                    '<br>' + 'ID: ' + JSON.stringify(message.data.sobject.Id) + 
                    '<br>' + 'Event type: ' + JSON.stringify(message.data.event.type)+
                    '<br>' + 'Created: ' + JSON.stringify(message.data.event.createdDate) + 
                    '</p>');    
                }); // <-- the } ends the call back, and the ); finishes the .subscribe method call.
    });
  })(jQuery);
</script>

Let’s unpack that a bit. To start with, we’re invoking jQuery via $ to find the element with the id “apiMessages”. We’re asking jQuery to append the following string to the apiMessages div for every record it receives. Thus, as records come in via the Streaming API, a paragraph tag is added to the apiMessages div containing the text block “Record Name: name of record” <br> “Id: id of record” <br> … and so forth. It’s this append method that allows us to display the notifications that are streamed to the page.

Gotchas

At this point we have a functional Streaming API implementation that will display every streamed record that matches our PushTopic. This can add a bunch of noise to the page, as we probably only care about records related to the object we’re viewing. There are two ways to accomplish this kind of filtering. The first is to adjust our subscription. When we subscribe to the topic we can append a filter to our topic name like this:

$.cometd.subscribe("/topic/ExternalAPINotifications?Company=='Acme'", function(message) {...});

In this situation, only records matching the push topic criteria AND whose company name is Acme would be streamed to our page. You can filter on any field on the record. For more complex filtering, you can filter on the message data itself: because you’re writing the callback function, you can always do nothing if you determine that the record you received isn’t one you wish to display.
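That second, do-it-in-the-callback approach can be as small as a predicate you consult before appending anything. A sketch — the record id and field names mirror the message shape used above, and currentRecordId is a hypothetical variable your page would define:

```javascript
// Decide, per streamed message, whether it is worth displaying.
// Here we only show messages about the record the page is currently viewing.
function shouldDisplay(message, currentRecordId) {
  return Boolean(
    message &&
    message.data &&
    message.data.sobject &&
    message.data.sobject.Id === currentRecordId
  );
}

// Inside the $.cometd.subscribe callback:
// if (shouldDisplay(message, currentRecordId)) { $('#apiMessages').append(/* ... */); }
```

Anything that fails the predicate is silently dropped, so the page stays quiet about unrelated records.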

Next steps, new ideas and other things you can do!

One thing we noticed after developing this is that we were left with a very large number of audit log records. In the future we may set up a “sweeper” to collect and condense the individual event audit logs into a single audit log of a different record type when everything has gone smoothly. We’ve also talked about creating a Dashing dashboard with live metrics from the fulfillment server. What ideas do you have? Leave a comment!

Charge it, point it, zoom it, press it,
Write it, cut it, paste it, save it,
Load it, check it, quick – rewrite it,
Plug it, play it, burn it, rip it,
Drag and drop it, zip – unzip it,
Lock it, fill it, call it, find it,
View it, code it, jam – unlock it — Daft Punk’s Technologic.

(Hair) Triggers.
If you were to ask your project manager and a developer to define a trigger, you’d probably end up with two very different answers. Often, triggers are a quick fix for project managers who know the declarative interface just won’t solve this one. Raise your hand if you’ve ever heard the phrase “just a quick trigger.” Sometimes. Sometimes triggers are just that, a quick fix. But if you ask a developer, you might hear those Daft Punk lyrics chanted in monotone: “Write it, cut it, paste it, save it, load it, check it, quick – rewrite it.” Sooner rather than later, developers learn firsthand the rabbit hole that triggers can be. After all, what kind of trigger is asked for? …is really needed? How will adding this trigger affect the other triggers already in place? How will existing workflow and validation rules play into the trigger? Will the trigger cause problems with future workflows?
Triggers are phenomenally powerful, but that phenomenal power comes with phenomenal (potential) complexity. A while back, Kevin O’Hara, a Force.com MVP from LevelEleven (they make some fantastic sales gamification software for Salesforce over at http://leveleleven.com/), posted a framework for writing triggers that I like to call Triggers.new.

Triggers.new
Kevin O’Hara’s framework is based on a big architectural assumption — namely, that your trigger logic doesn’t actually belong in your trigger; instead, it lives in a dedicated class that is invoked by your trigger. Regardless of whether you adopt this framework, placing your trigger logic in a dedicated class provides valuable structure to triggers in general and makes long-term maintainability much simpler. With this assumption in mind, the framework actually changes very little about how you write the actual trigger file. Here’s a generic definition of a trigger utilizing the framework.
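Under the framework, the trigger file shrinks to a one-line delegation — a sketch, with illustrative object and class names:

```apex
trigger ContactTrigger on Contact (before insert, before update, before delete,
                                   after insert, after update, after delete, after undelete) {
    // All logic lives in the handler class; the trigger only delegates.
    new ContactTriggerLogic().run();
}
```

Every trigger in the org ends up looking like this, which is exactly the point.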

Inside the logic class there are methods available to override from TriggerHandler that correspond to trigger execution states, i.e.: beforeInsert(), beforeUpdate(), beforeDelete(), afterInsert(), afterUpdate(), afterDelete(), and afterUndelete(). It’s inside these methods that your trigger logic actually resides. If, for example, you wanted your ContactTrigger to apply some snark to your Contact’s address, your ContactTriggerLogic might look something like this:
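A sketch of such a handler, assuming O’Hara’s TriggerHandler base class is deployed in the org (the snark itself is, of course, illustrative):

```apex
public class ContactTriggerLogic extends TriggerHandler {
    // Runs in the before-insert execution state; Trigger.new is still writable here.
    public override void beforeInsert() {
        for (Contact c : (List<Contact>) Trigger.new) {
            if (String.isBlank(c.MailingCity)) {
                c.MailingCity = 'The Middle of Nowhere';
            }
        }
    }
}
```

Note there’s no bulkification boilerplate or Trigger.isBefore/isInsert branching; the framework routes each execution state to the right method for you.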

So why do the extra work?
Not only does this framework help keep your code organized and clean, it also offers a couple of handy-dandy, very nice™ helpers along the way. As a trigger developer, you’ll sooner or later run into execution loops. An update fires your trigger, which updates related object B, which has trigger C, which updates the original object … and we’re off. Kevin O’Hara’s trigger framework has a built-in trigger execution limit. Check it out:
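Using the limit is a one-liner in the handler’s constructor — a sketch, with the handler name carried over from the example above:

```apex
public class ContactTriggerLogic extends TriggerHandler {
    public ContactTriggerLogic() {
        // A second run of any handler method in this execution context throws.
        this.setMaxLoopCount(1);
    }

    public override void afterUpdate() {
        // ...updates that might otherwise re-fire this very trigger...
    }
}
```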

That bit of code, setMaxLoopCount(1), means that a second invocation of a given method, i.e. afterUpdate(), within the same execution context will throw an error. Much less code than dealing with, and checking the state of, a static variable. Say it with me now: very nice!

Perhaps even more important than the max invocation count helper is the built-in bypass API. The bypass API allows you to selectively deactivate triggers programmatically, from within your trigger code. Say what? Yeah, it took me a second to wrap my head around it too. Imagine the scenario: you’ve got a trigger on object A, which updates object B. Object B has its own set of triggers, and one or more of those triggers may update object A. Traditionally, your options for dealing with this have been just what we did above: use setMaxLoopCount(), or a static variable, to stop the trigger from executing multiple times. But with the bypass API we have a new option; any trigger that is built with this framework can be bypassed thusly:
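A sketch with illustrative object and class names; bypass() and clearBypass() are the framework’s static methods, keyed by the name of the handler class you want silenced:

```apex
public class ObjectATriggerLogic extends TriggerHandler {

    public override void afterUpdate() {
        List<Object_B__c> bRecords = [SELECT Id FROM Object_B__c LIMIT 10];

        // Temporarily disable Object B's framework-based trigger logic
        TriggerHandler.bypass('ObjectBTriggerLogic');
        update bRecords; // Object B's handler is skipped for this DML
        // Re-enable it so later DML in this context behaves normally
        TriggerHandler.clearBypass('ObjectBTriggerLogic');
    }
}
```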

What’s next?
I believe that trigger frameworks like this one provide quite a few benefits over free-form triggers, both in terms of raw features and in terms of code quality. Splitting the logic out of the trigger and into a dedicated class generally increases testability, readability and structure. But this framework is just a starting point. Imagine the possibilities! What if you could provide your Admin with a Visualforce page to enable or disable trigger execution? Wouldn’t that make your admin giggle and offer you Starbucks? #starbucksDrivenDevelopment

Here’s the lowdown on how to get around the “You have uncommitted work pending. Please commit or rollback…” error when trying to mix DML and HTTP callouts in your test methods.

First, a little background and a health and safety warning. Sooner or later you’ll be faced with testing a method that both (a) manipulates existing data, and (b) calls out to a third-party service for more information via HTTP. Sadly, this is one of those situations where testing the solution is harder than the actual solution. In a test, you should be inserting the data your method is going to rely on. But making a DML call, like insert, will prevent any further HTTP callouts from executing within that Apex context. Yuck. That means inserting, say, an account, and then making a callout with some of that data … well, that just won’t work. No callouts after a DML call.

So let’s cheat a bit. Apex gives us two tools that are helpful here. The first is the @future annotation. Using the @future annotation allows you to essentially switch Apex contexts, at the cost of synchronous execution. Because of the Apex context switch, governor limits and DML flags are reset. Note that a @future method that makes callouts must be annotated @future(callout=true). Our second tool is the two-fer of Test.startTest() and Test.stopTest(). (You are using Test.startTest() and Test.stopTest(), right?) Among their many tricks is this gem: when you call Test.stopTest(), all pending @future methods are immediately executed. Combined, these two tricks give us a way to both insert new data as part of our test, and then make callouts (which we’re mocking, of course) to verify, for example, that our callout code is properly generating payload information. Here’s an example:

//In a class far far away…
@future(callout=true)
global static void RunMockCalloutForTest(String accountId){
     TestRestClient trc = new TestRestClient();
     Id aId;
     try {
          aId = (Id) accountId;
     } catch (Exception e) {
          throw new IllegalArgumentException('Failed to cast the given accountId into an actual Id. Send me a valid id, or else.');
     }
     Account a = [select id, name, stuff, foo, bar from Account where id = :aId];

     //Register the mock response, then make your callout
     RestClientHTTPMocks fakeResponse = new RestClientHTTPMocks(200, 'Success', 'Success', new Map<String,String>());
     System.assertNotEquals(fakeResponse, null);
     Test.setMock(HttpCalloutMock.class, fakeResponse);
     System.assertNotEquals(trc, null); //this is a lame assertion. I'm sure you can come up with something useful!
     String result = trc.get('http://www.google.com');

}

//In your test…
@isTest
static void test_method_one() {

     //If you're not using SmartFactory, you're doing it way too hard. (and wrong)
     Account account = (Account) SmartFactory.createSObject('Account');
     insert account;
     Test.startTest();
     MyFarawayClass.RunMockCalloutForTest(account.id);
     Test.stopTest();
}

This test works because we can both (a) switch to an asynchronous Apex context that’s not blocked from making HTTP callouts, and (b) force that asynchronous Apex context to execute at a known time with Test.stopTest().

Today I released Mobile Admin Tools, a RubyMotion-based Salesforce app allowing Admins to manage their Salesforce users on their iPhones. Mobile Admin Tools specifically allows Admins to:

  1. See a list of all users in their Org
  2. See a particular user’s login history
  3. See the details of that user’s account, such as mobile #
  4. Initiate a password reset (Salesforce will reset the password and email a new, temporary password to the user’s listed email)
  5. Deactivate or (re)activate users
  6. Toggle various permissions, such as Visualforce Development Mode

Additionally, the app gives admins the ability to tweet their experience after resetting a password, with some suggested messages and hashtags like “Just reset another password with Mobile Admin Tools #atTheBar #whySFDCAdminsDrinkLess”

You can find the source code (it’s open source) and more details here: http://noeticpenguin.github.io/MobileAdminTools/

RubyMotion is a revolutionary new toolchain for native iOS development from HipByte. Using RubyMotion, developers can now write iOS apps in Ruby rather than Objective-C. RubyMotion statically compiles the Ruby code to run on the Objective-C runtime. Because of this, RubyMotion apps have full access to all the public APIs and frameworks available to traditional Objective-C developers. This includes not only the basic UIKit framework for application development, but also hardware-specific bits like CoreLocation. Additionally, the wealth of open-source controls and libraries available through CocoaPods is fully available to RubyMotion developers.

Salesforce provides iOS mobile developers a rich SDK that handles authentication (via OAuth2), query building and execution, and JSON deserialization, amongst many other things. Developers eager to get started can find the SDK here: Salesforce Mobile SDK (iOS)

Ideally, all third-party libraries, frameworks and SDKs would be available as CocoaPods; however, the Salesforce SDK consists of multiple individual libraries and is not available as a CocoaPod (yet; see this issue).

Instead, the various pieces and parts of the SDK must be incorporated into your RubyMotion project via the Rakefile’s vendor and libs directives.
Here is an annotated Rakefile detailing the what, how and why of incorporating the Salesforce Mobile SDK (iOS) into your RubyMotion project. With this Rakefile as a starting point and guide, you can create apps that interact with Salesforce using RubyMotion!

Some Important notes:

  1. This Rakefile assumes you have placed your Salesforce SDK in «ProjectRoot»/vendor/Salesforce
  2. Note how some of the Salesforce SDK pieces are included via the app.libs << directive. These are precompiled, distribution-ready .a files that Salesforce provides.
  3. However, not all of the Salesforce SDK pieces can be utilized by RubyMotion when incorporated via app.libs. Specifically, any piece of the SDK your application will be directly calling must be compiled by RubyMotion. RubyMotion exposes Objective-C methods by generating a .bridgesupport file. These files are generated from the .h header files included with the source. These pieces are incorporated via the app.vendor_project directive.
  4. Please note that RestKit, SalesforceOAuth, and SalesforceSDK must all be included via app.vendor_project. When including other vendor projects, always be sure to include the :headers => hash element so that the .bridgesupport file is created.
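Since the annotated Rakefile itself didn’t survive into this excerpt, here’s a sketch reflecting the notes above. The exact paths, library file names, and header globs under vendor/Salesforce are assumptions you’ll need to adjust to match your SDK download:

```ruby
# Rakefile (sketch): wiring the Salesforce Mobile SDK into a RubyMotion app
Motion::Project::App.setup do |app|
  app.name = 'MyForceApp'

  # Precompiled .a libraries we never call directly go in via app.libs <<
  app.libs << 'vendor/Salesforce/libs/libSalesforceCommonUtils.a'

  # Pieces our code calls directly must be compiled by RubyMotion so a
  # .bridgesupport file is generated from their .h headers. Note the
  # :headers element on each vendor_project.
  app.vendor_project('vendor/Salesforce/RestKit', :static,
    :headers => 'Code/**/*.h')
  app.vendor_project('vendor/Salesforce/SalesforceOAuth', :static,
    :headers => 'Headers/**/*.h')
  app.vendor_project('vendor/Salesforce/SalesforceSDK', :static,
    :headers => 'Headers/**/*.h')
end
```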

A note on translating example Objective-C code to RubyMotion specific to the Salesforce SDK.

The example code for querying the Salesforce RestAPI looks something like this:

[[SFRestAPI sharedInstance] performSOQLQuery:@"QueryString" failBlock:^{ stuff } completeBlock:^{ other stuff }];

This Objective-C code is translated to RubyMotion thusly:

SFRestAPI.sharedInstance.performSOQLQuery("query string", failBlock: lambda {|e| stuff }, completeBlock: lambda {|c| otherStuff })

Note that RubyMotion is expecting Lambdas as the blocks for failBlock and completeBlock.

Protip: the Ruby method named “method” (I kid you not) returns a Method object that can stand in for a lambda; thus you can say completeBlock: method(:some_method_name) and, when the completeBlock is run, it will execute your method. Very handy.

This guide is intended as a cheat sheet for rapid reference, not comprehensive learning.

The following objects are modified, or new to orgs with the NPSP installed:

Contacts and Organizations:

  • NPSP establishes two models of auto-linking between a contact or an organization and the relevant account:
    • A catch-all Account that is the default for all contacts without a defined account. (This is called the Bucket.)
    • A slightly modified contact creation/editing screen that forces a proper 1:1 link to an account.
  • You can switch back and forth between these models, but there is, of course, migration work.
  • A contact can be exempted from either model (disassociated from any account) by marking it as private.
  • Automatic primary Opportunity Contact Roles, i.e.: if you give an Opp a valid contact Id, that contact Id will generate an Opp Contact Role automatically.
  • Automatic 1:1 Account on an Opp when a contact Id is supplied, i.e.: if you give an Opp a valid contact Id, the Account Id is automatically populated from the account on the contact.
  • Automatic Contact Role on an Opp when a 1:1 Account Id is supplied, i.e.: if there’s only one contact on an account, then when that Account Id is specified on the Opp, the Opp will auto-populate the contact Id.

Households (New Custom Obj)

  • Households are essentially a collection of contacts living at the same physical address. Useful for, say, physical mailing control, i.e.: only send one flyer!
  • This just in: Households are also used for soft credits. Think of a soft credit as a way of sharing credit for a donation with your spouse or family. I give $100 to the local PuppyHelp non-profit in the name of Mr. and Ms. CodeFriar; the CodeFriar household is credited with that donation.

Recurring Donations (New Custom Obj)

  • There are two key picklists on this object:
    • Installment Period
    • Schedule Type

These two picklists set up how the donations are created. (Note: Donations == Opps.) The idea is that when you create a recurring donation, the object will create N Opps with close dates spaced X apart.

Relationships (New Custom Obj — but mostly a Related List)

  • While this is an entire object, it is mostly exposed via a related list on Contact, plus a few additional fields.
  • It also includes reports. (Show me all the children of single moms…)
  • The goal is pretty simple: establish familial or organizational relationships amongst contacts (spouse, child, etc.).

Affiliations

  • This is essentially a bolt-on for “nice to know” information.
  • Used to associate a contact with an organization where the organization is NOT the non-profit.
  • i.e.: if you want to establish an organization, say, First United Methodist, but you, the non-profit, are UMC of NC, then you would use an Affiliation to associate John and Jane Doe with First UMC. It’s nice to know they go to FUMC, but not critical.

Here’s the Entity Relationship Diagram:

Salesforce provides a nice (but nerdy) ERD for the NPSP. If ERDs are your thing, this will help:


Additional Resources: