Promises Redux.

When I first released the Promise library, there was a small bug: Apex wasn’t resetting the DML context between promise steps. To work around this, Promise fired off each promise step through an @future method. This sidestepped the problem, but it had an impact on execution time. Salesforce resolved the issue in Winter ’17. In response, I spent some time refactoring the library. Today I’m happy to announce Promises 2.0, the AwesomeSauce edition. You can find the code and installation instructions here:

The good (what changed?)

The library no longer executes promise steps via an @future method. This can have a dramatic, positive effect on execution times. The first version required promises to accept and return JSON-serializable objects. The new version requires nothing more specific than an Object: you can pass around and return sObjects, primitives, and custom Apex objects. The biggest refactor comes from naming conventions. I established two classes in the first version: PromiseBase and Promise. I’ve refactored away the need for PromiseBase, so the library, apart from examples and tests, is now a single class. Additionally, I’d received feedback suggesting ‘promiseStep’ needed a better name. After some discussion with other developers, I settled on ‘Deferred’. Classes implementing the Deferred interface execute code via Queueable Apex. Because of this, the system defers their execution until it has resources.

The bad (Sorry, I made a few breaking changes)

The refactoring I mentioned above meant that the API has changed. V1 and V2 are not compatible with each other. Yet, the gains provided by the changes justify the need to refactor existing promise code. To migrate to version 2, you’ll need to do two things:

  1. Change the interface your promise classes implement, from Promise.PromiseStep to Promise.Deferred.
  2. Remove references to SerializableData, either by passing specific object types or by accepting and returning generic Objects.

Below is a full example class that uses Promises v2.0 – AwesomeSauce Edition:

/**
 * Promise v2.0 – Kevin Poorman
 * Thanks to Chuck Jonas!
 * This class exists to demonstrate the usage of the Promise library.
 */
public class Demo_PromiseUse {

    // This execute method optionally accepts a string param that is used to pass
    // data into the initial promise step.
    public void execute(String param) {
        if (param != null) {
            new Promise(new Demo_PromiseStep())
                .then(new Demo_PromiseStep_Two())
                .error(new Demo_PromiseError())
                .done(new Demo_PromiseDone())
                .execute(param);
        } else {
            new Promise(new Demo_PromiseStep())
                .then(new Demo_PromiseStep_Two())
                .error(new Demo_PromiseError())
                .done(new Demo_PromiseDone())
                .execute();
        }
    }

    // This method intentionally creates a divide-by-zero error so we can test
    // handling an exception. Note that there is no error handler defined here;
    // the .error() method is optional, and without it the error is just logged.
    // Note! In dev and sandbox orgs the Queueable Apex queue depth is 1! As such,
    // you're only really testing the first promise step. The associated test for
    // this method needs the error to occur in step 2, so that's the first step
    // we list.
    public void executeWithException() {
        new Promise(new Demo_PromiseStep_Two(0))
            .done(new Demo_PromiseDone())
            .execute();
    }

    // Like the previous method, this execution method is set up to cause a
    // division-by-zero error in Demo_PromiseStep_Two's resolve method. The
    // constructor for that class accepts a divisor, in this case 0. However,
    // this method includes an error handler. The test for this method ensures
    // that the exception handler is invoked.
    public void executeWithExceptionWithHandler() {
        new Promise(new Demo_PromiseStep_Two(0))
            .error(new Demo_PromiseError())
            .done(new Demo_PromiseDone())
            .execute();
    }

    //  ____        __                        _  ____ _
    // |  _ \  ___ / _| ___ _ __ _ __ ___  __| |/ ___| | __ _ ___ ___  ___  ___
    // | | | |/ _ \ |_ / _ \ '__| '__/ _ \/ _` | |   | |/ _` / __/ __|/ _ \ __|
    // | |_| |  __/ _|  __/ |  | | |  __/ (_| | |___| | (_| \__ \__ \  __\__ \
    // |____/ \___|_|  \___|_|  |_|  \___|\__,_|\____|_|\__,_|___/___/\___|___/

    public class Demo_PromiseStep implements Promise.Deferred {
        private Integer checkInteger; // helpful for testing. not generally needed.

        // This is the required method for a Deferred class.
        public Object resolve(Object incomingObject) {
            // Do some asynchronous work; in this case, we'll pretend it's in
            // our helper method:
            checkInteger = exampleHelperMethod();
            return checkInteger;
        }

        // Helper methods.
        // I put this in a helper method not out of necessity, but because it
        // illustrates that this is a normal class: you can have multiple methods
        // and architect the class so the code is easily testable and isolated.
        private Integer exampleHelperMethod() {
            return Crypto.getRandomInteger();
        }
    }

    public class Demo_PromiseStep_Two implements Promise.Deferred {
        private Integer dataPassedIn;
        private Integer slowAsyncWork;
        private Integer divisor;

        public Demo_PromiseStep_Two() {
        }

        // This constructor exists to facilitate testing. By accepting an integer,
        // I can later cause a division-by-zero error that is used to test error
        // handling in the framework.
        public Demo_PromiseStep_Two(Integer divisor) {
            this.divisor = divisor;
        }

        // This is the required interface method for a Deferred class.
        public Object resolve(Object incomingObject) {
            // Do some asynchronous work; in this case, we'll pretend it's in our
            // helper method:
            if (incomingObject != null) {
                this.dataPassedIn = (Integer) incomingObject;
            }
            // Intentionally set up to cause a divide-by-zero error.
            if (this.divisor != null) {
                Integer thrown = 1 / this.divisor;
            }
            slowAsyncWork = exampleHelperMethod();
            return slowAsyncWork;
        }

        // Helper methods.
        private Integer exampleHelperMethod() {
            return Crypto.getRandomInteger();
        }
    }

    //  _   _                 _ _            ____ _
    // | | | | __ _ _ __   __| | | ___ _ __ / ___| | __ _ ___ ___  ___  ___
    // | |_| |/ _` | '_ \ / _` | |/ _ \ '__| |   | |/ _` / __/ __|/ _ \ __|
    // |  _  | (_| | | | | (_| | |  __/ |  | |___| | (_| \__ \__ \  __\__ \
    // |_| |_|\__,_|_| |_|\__,_|_|\___|_|   \____|_|\__,_|___/___/\___|___/

    public class Demo_PromiseDone implements Promise.Done {
        // This is used to demonstrate the use of a class instance var populated
        // by a constructor. Because this is an installable package, I'm using an
        // Account.
        private Account internalAccount;
        private String completed;

        // Constructors
        public Demo_PromiseDone() {
        } // No-op constructor

        public Demo_PromiseDone(Account incomingAccount) {
            this.internalAccount = incomingAccount;
        }

        // This is the main method that the Promise.Done interface requires.
        // You could use this to persist a record, or to write a log.
        public void done(Object incomingObject) {
            // We could do nothing here (NOOP), but we could also do something
            // with the incomingObject.
            if (incomingObject != null) {
                // Do something here. Maybe save a record?
                // This is a helper assignment for testing the library.
                completed = 'completed';
            }
        }
    }

    public class Demo_PromiseError implements Promise.Error {
        private String errorMessage;

        public Demo_PromiseError() {
        }

        // This is the main interface method that you must implement. Note that
        // it does have a return type; whatever you return here is passed on to
        // the done handler.
        public Object error(Exception e) {
            // For now, just dump it to the logs.
            System.debug('Error Handler received the following exception: ' + e.getMessage() + '\n\n' + e.getStackTraceString());
            // Make the error available for testing.
            this.errorMessage = e.getMessage();
            // Alternatively, you could do any number of things with this exception, like:
            // 1. Retry the promise chain. For instance, if an external service returns a temporary error, retry.
            // 1a. Use the flow control object to cap the retries.
            // 2. Log the error to a UI-friendly reporting object or audit log.
            // 3. Email the error report and related objects to the affected users.
            // 4. Post something to Chatter.
            return e;
        }
    }
}


Straight up now tell me if you’re using this lib!

Since I released the library I’ve talked to many developers who are using it. They’ve discovered a few use cases I hadn’t thought of. For instance, one developer is using Promises in sandbox startup scripts. This helps his company ensure the order of sandbox data creation. They have many address validations, callouts and integrations during the creation of accounts. Creating those accounts and processing the integrations must finish first; until then, creating dependent objects will fail. Another developer is using Promises to automate a SaaS company’s billing. Harnessing Promises, she was able to write a single chain of steps. Promises let the process retry callout steps when an integration stops responding. Steps exist to create a case when the payment processor declines a card. When finished, the process sends the customer a receipt. Super cool. If you’re using Promise, or have an interesting use case for promises, drop me a line.


Installation instructions here:

Recently, I discovered a feature of Salesforce that I’d somehow missed — Named Credentials. When I ‘discovered’ them this week it was quite a lightbulb moment. They are conceptually simple, but that simplicity hides their power.

What is a Named Credential?

At an object level, you can think of them as a combination of Remote Site Settings and, well, credentials. These credentials can be as simple as username/password, but you can also use oAuth2. This facilitates Named Credentials based on oAuth2 authentication between your org and external systems. It’s easy to overlook, so I want to draw your attention to this: because Salesforce offers an oAuth2 authentication option, Named Credentials enable authentication to a second Salesforce org!
There are a few other features of the declarative side of Named Credentials. For instance, you can use custom HTTP Certificates. Strong Encryption for the win! Additionally, you can enable the use of merge fields in the http header and body. 

What do you do with a Named Credential?

Good question, glad you asked! What good is a declarative URL-and-credential object? It’s not as if we have declarative tools for making callouts. Or do we? As with most declaratively created objects, Named Credentials have a ‘name’ property. We can use that name property to dramatically simplify our Apex callout code. Specifying a Named Credential by its name when setting an HTTP callout’s endpoint causes the callout to use the Named Credential’s URL and authentication. That means we can go from this:
HttpRequest req = new HttpRequest();
// Don't do this! Hardcoded credentials and a placeholder endpoint, shown only
// to illustrate what Named Credentials replace.
String username = 'DoYouEvenSecurity?';
String password = 'DontHardcodeCredentials';
req.setEndpoint('https://example.com/some/path'); // illustrative endpoint
Blob headerValue = Blob.valueOf(username + ':' + password);
String authorizationHeader = 'BASIC ' + EncodingUtil.base64Encode(headerValue);
req.setHeader('Authorization', authorizationHeader);
Http http = new Http();
HTTPResponse res = http.send(req);
to this:
HttpRequest req = new HttpRequest();
// 'My_Named_Credential' stands in for whatever you named your Named Credential.
req.setEndpoint('callout:My_Named_Credential/some/path');
Http http = new Http();
HTTPResponse res = http.send(req);

Why should I use a Named Credential?

Wait, those are essentially identical! On the face of it, the code changes are minimal enough that you may be wondering why you should switch. There are at least two good reasons to start using Named Credentials. First, our code samples here are vague and generic, the better to illustrate what’s going on without getting bogged down in details. That said, you know not to hardcode usernames and passwords; if you’ve got credentials hardcoded, you have bigger issues. Our sample code is simply missing all the security best practices: there’s no querying for encrypted credentials and, most importantly, no code to authenticate with oAuth. Simply put, our example code ignores the complexity found in real production callouts. With all that in mind, the first reason to use Named Credentials is simple: they offload the storage of credentials and authentication to a declaratively controlled process.
This leads us to the second reason. A simple change set deployment with a few components takes at least 20 minutes to prepare and upload. Add test and deployment time and, well, let’s just say any hardcoded credentials or URLs become very tedious to change. Remember, credentials aren’t the only thing you may have to change: unless you’ve made the URL queryable, you’ll likely have to deploy to change it. With Named Credentials, you can update without deploying. Furthermore, updating the Named Credential updates all callouts using it! As a bonus, if you’re using oAuth 2 with your Named Credential and your oAuth provider returns a refresh token, that Named Credential will continue to work until the refresh token is revoked, even if you have changed the password. (More on this later.)

Give me an example!

Here’s the scenario. You’re the Salesforce lead for a large enterprise named Acme Corp. In a meeting earlier today, you learned Acme has purchased Beta Corp. Beta Corp’s Salesforce team will continue to run their org. Acme’s leadership, however, wants to surface some of Beta’s data in Acme’s org. Here’s where Named Credentials come into play. Using a Named Credential allows you to write Apex code to make API calls into Beta’s org. Here are the follow-the-bouncing-ball steps to make this work.
  1. Create a connected app with oAuth, and set the default scope to ‘refresh_token full’. As you become more familiar with how these work, you may want to adjust that default scope. You’ll need to specify a callback URL; for now, put in “”. We’ll be editing this in a bit.
  2. Create an Auth Provider. (Setup -> Security Controls -> Auth Providers). When prompted, choose ‘Salesforce’ as the provider type. Give it a name, etc. and populate the oAuth consumer key and secret from step 1. Leave the rest of the details blank — this will use Salesforce Defaults. Once you’ve clicked save, you’ll see at the bottom of the page a callback URL. Copy that URL.
  3. Go back to your connected app, and edit it. Paste your callback URL from step 2 into the Callback URL field.
  4. Create your Named Credential. For identity type, select Named Principal. Note: you may want to use per-user, but that’s an exercise for the reader. For the authentication protocol, select ‘oAuth 2.0’. After selecting oAuth as the authentication protocol, a new field will appear. Populate the authentication provider field with the Auth Provider created in step 2. When you’re connecting two Salesforce orgs, make sure your Named Credential URL uses your instance URL or the full URL of your My Domain.
  5. Write Apex code to use the named credential. (see above)
  6. … Profit?

Where do I do this?

For our Acme Corp use case, all of the setup steps above happen in the Acme Corp org. Let me repeat that: all of the steps above happen in the Acme Corp org. When you save the Named Credential (step 4 above) you’re taken to a standard Salesforce login screen. This is the only bit that involves the other org at all; you’ll need a login to Beta Corp’s org here. Note: your Beta Corp login defines the data visibility of your Named Credential.

We bought Gamma Corp!

Congrats on your company’s success! Guess what! You don’t need to redo all these steps to pull in Gamma Corp’s data. All you’ll need to do is create a new Named Credential pointing at Gamma Corp’s org. You’re able to re-use the connected app and Auth Provider.

Maintaining Named Credentials

My company has strict password policies that force you to change passwords every so often, and Acme Corp has the same type of policy. Thankfully, Named Credentials using oAuth and the Salesforce Auth Provider rely on refresh tokens, and those tokens are valid until revoked. This means that even if your password changes, the integration will continue to work (until you revoke the refresh token). This makes Named Credentials using oAuth 2 largely maintenance free! Like I said, use you some Named Credentials for great good!

A framework for using the promise pattern in Apex

Writing instructors tell you to answer the five W’s: who, what, where, when and why. Mine even instructed me to answer them in that particular order. But today we’re talking about asynchronous code execution and promises. That means it might be easier to write this post a bit out-of-order:


I’ve always understood the hardest thing in computer science to be naming things. Followed immediately by cache invalidation and asynchronous execution, of course. Writing, debugging and maintaining asynchronous (and/or multi-threaded) code is hard.

The joke, in fact, has always been ‘hard, asynchronous is, debug to.’

One way to think about this problem is to ask ‘when should I execute this code?’ The answer, of course, is simple: when I have all the data I need to execute it. Which leads to another question: ‘how do I know when I have all the data, without stopping and waiting every time I have a long-running method?’ There are at least two answers to that question. If you’ve ever written Javascript, you’ve likely hit upon the first answer: callbacks. Callbacks are functions passed as parameters into a long-running asynchronous method. The callback then executes at the very end of that method. Callbacks solve the ‘when’ problem by defining the ‘what’ before execution starts. Another solution is Promises.


Promises get their name because they represent the promise of an eventual value. A Promise is a state machine with three states: Pending, Resolved and Error. (These states may have different names depending on the implementation.)

An image describing the state machine of promises


A promise starts its life in Pending status. As the code completes, it either succeeds and becomes Resolved, or fails and ends up in Error status. Promises solve the ‘when’ problem by introducing a method, .then(), which attaches the passed concluding code to the initial method. Promises do this by adding methods to, and executing them from, a stack. Every invocation of .then() adds to the stack. Executing the promise causes the stack’s methods to run in order.
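To make the stack idea concrete, here is a toy, synchronous sketch in Apex. It is purely illustrative, not the library’s actual implementation; the TinyChain and Step names are invented for the example:

```apex
// A toy promise-like chain: .then() pushes steps onto a list, and
// execute() runs them in order, feeding each step's output into the
// next step's input.
public class TinyChain {
    public interface Step {
        Object resolve(Object input);
    }

    private List<Step> steps = new List<Step>();

    public TinyChain then(Step s) {
        steps.add(s); // every .then() call adds to the stack
        return this;  // returning 'this' is what enables chaining
    }

    public Object execute(Object seed) {
        Object data = seed;
        for (Step s : steps) {
            data = s.resolve(data); // run the stack's methods in order
        }
        return data;
    }
}
```

A real implementation runs each step asynchronously; this sketch only shows why chaining works: .then() returns the chain itself, so calls compose left to right.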

The state machine, combined with a stack, makes Promises easy to describe (in English): I, the developer, promise the system that this method will return some data. When it does, ‘then’ do this other work I’ve described.

One of the stellar features of Promises is their ability to return not only data, but a new Promise. This allows you to chain promises together into processes. For example, a developer might need to pull some data from a third-party API. However, before she can access the data, she has to authenticate. In practice this means making a callout to authenticate and retrieve an oAuth token. Token in hand, a second call retrieves the data. With Promises this can be described as a single process:

Login() // this method must return a promise!
    // the promise Login() returns yields the oAuth token and dependency
    // injects it into the resolve method of our fetchData() class
    .then(new fetchData())


Why Promises? (another? promise library for Apex?)

All this talk about promises was empty theory until recently. With the release of Winter ’15, Salesforce gave us Queueable Apex, the interface that enables the implementation of Promises in Apex. Last year at Dreamforce I gave a talk on the Promise pattern and how to implement it in Apex. Unfortunately, none of the example code had that beautifully simple syntax using .then(). After my talk, Chuck Jonas (developer extraordinaire) wrote a promises implementation called Apex-Q. Chuck’s library is amazing, but I wanted something a bit simpler. Apex-Q includes two types of Promises: one that facilitates HTTP callouts, and one that doesn’t. The trade-off needed to make all Promises HTTP-callout safe, however, is pretty minimal: namely, the data passed between promises must be JSON serializable. Today I’m happy to release Promise, an alternative implementation of Promises in Apex.

How (do I use it)?

With the Promise framework, you’ll create classes that represent each ‘step’ of your promise chain. These classes simply implement the Promise.promiseStep interface and look like this:

public class Demo_PromiseStep implements Promise.PromiseStep {
    public Promise.SerializableData resolve(Object incomingObject) { … }
}


The promiseStep interface requires only the resolve method, which must return a SerializableData object. Each resolve method accepts the output of the previous step; that output is dependency injected into the next resolve method.

Once you’ve implemented each of your promiseSteps, you can execute a promise chain like this:

new Promise(new Demo_PromiseStep())
    .then(new Demo_PromiseStep_Two())
    // Add as many .then(new someClass()) calls as you need
    .error(new Demo_PromiseError())
    .done(new Demo_PromiseDone())
    .execute();


Note: while this example has two demo steps, if your process had, say 5 steps, you’d simply add 3 more .then(new someClass()) calls.

Under the covers, the .then() method populates a list of promise steps. The execute() framework method pops the first promise step off the list and executes it. If there are other steps on the list, it enqueues the next step by calling an @future annotated method. If the execution of a step throws an error, the framework invokes the error handler; the .error() method allows you to specify the error-handling class. Finally, the .done() handler always runs! The done handler is a great place for notifications and any other wrap-up work you need to do.
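As a rough sketch (not the framework’s actual source; the field names and the enqueueNextStep helper are invented for illustration), that flow looks something like this:

```apex
// Illustrative sketch of the execution flow described above.
public void execute(Object input) {
    // Pop the first step off the list and run it.
    Promise.PromiseStep current = promiseSteps.remove(0);
    try {
        Promise.SerializableData result = current.resolve(input);
        if (!promiseSteps.isEmpty()) {
            // More steps remain: hand the result to an @future method,
            // which re-enters execute() in a fresh async context. This is
            // why step data must be JSON serializable.
            enqueueNextStep(JSON.serialize(result)); // hypothetical helper
        } else {
            doneHandler.done(result); // .done() always runs at the end
        }
    } catch (Exception e) {
        // A failed step routes through the .error() handler; its return
        // value is handed to the done handler.
        doneHandler.done(errorHandler.error(e));
    }
}
```

The important design point is the @future hop between steps: each step gets fresh governor limits and a fresh DML context, at the cost of serializing the data passed between steps.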


You can find Promise here: You can install it in your sandbox or production orgs using the deploy button at the bottom of the readme. Promise includes a test class, as well as some example classes demonstrating use of the framework. The tests provide nearly 100% coverage; however, they rely on the included example classes and are too functional for my taste. For now, please note that deleting the example classes from your org will cause the tests to fail.


If you’ve ever written code to make several sequential HTTP callouts… If you’ve wondered how to ensure one batch process completes before the next starts… Promise can help. Promise is for Apex developers who develop, debug and maintain asynchronous code. Promise is not only for doing integration work with external services. In fact, anytime you need to attach code execution to the completion of other code, Promise can help!

Or how Trailhead taught my Grandma about Salesforce:

Recently I found myself in the position of trying to explain to my Grandma what Salesforce is. As it turns out, this is a really easy thing to answer, if you’re talking to someone who’s used computers all their life. For my grandma, however, there were just too many hurdles to overcome. To her, CRM was her job, not a tool she used for her job. As the front office manager for a beauty school in small town Indiana, she called, wrote and spoke with hundreds of clients a week. The idea of needing a computer to remember and record that information… well, she’d rather use her address book and her sharp-as-a-tack mind. It’s true, as they say, that they don’t make them like they used to.

She came around to the idea of CRM eventually. Then I had to explain why Salesforce, as a CRM, is better than her old-school (and flat-out old) address/notebook. Way to put me on the spot, Grandma! Explaining metadata is actually a tall order, especially since I smarted off that metadata was just data about data! To which my grandmother said, “isn’t data about data just data?”

Uh, yes. But… Trailhead to the rescue (Again)

Thankfully, Trailhead has come to my rescue. There’s a new trail that breaks down exactly what Salesforce is, what makes it special, and why it’s better than ye-olde-notebook. Called The Salesforce Advantage, the trail walks you through not only what Salesforce is, but the technology behind it, including metadata and the cloud.

This isn’t a technological deep dive into the how of Salesforce, but it does discuss the what of Salesforce at a nice introductory pace. It’s perfect for my Grandma. More importantly, it’s perfect for managers and executives, new admins and developers to learn the strategic advantages of Salesforce. If you’ve never done Trailhead, or if you’re new to Salesforce — hell, if your Grandma asks what you do for a living … climb this trail! Especially if you’re trying to explain to Grandma!

Almost 20 years ago my father answered the phone as we ate a late family dinner. He was mightily confused, because the man on the other end of the call was asking him questions he had no idea how to answer. After a few moments of confusion, my father told the man he was pretty sure he wanted his son, Kevin, not him.

On the other end waited Ian Murdock. He’d taken the time to call me to do my Debian maintainer identity interview. It was, and is, a big deal from a security standpoint to verify the identity of new Debian maintainers. But the task is tedious: calling up new maintainers and talking to them for half an hour. Imagine my surprise when the man doing my interview was *the* Ian from deb*ian*. One of the co-founders of the entire project had taken the time to call me. I was impressed.


Years later, I got to meet Ian in person at an ExactTarget conference, and I thanked him for calling a nerdy high school kid to verify his identity. He not only confirmed my identity for security purposes, but he affirmed that I mattered, and could help. At the time, I don’t think he remembered the call, and I don’t think I was sufficiently able to convey what his call meant to me.

Last Monday, Ian Murdock was found dead in his home. The details are few, the speculation rampant. Police may or may not be involved. The proximate cause might have been suicide. Was his Twitter account hacked? Regardless of the details, I’m reminded that all too often our culture judges people by their actions in the worst moments of their lives. Those who have killed are forever branded as murderers by their actions at their worst moments. We don’t seem to have a cultural construct for good people who made mistakes, not where suicide or the police are concerned. I’ve already started to see Ian eulogized not for his contributions to the world, but as a “crazy” and someone who gave up. I don’t know how Ian died; it’s likely you don’t either. It doesn’t matter. Ian was more, is more, than the unknown actions at the end of his life. He was also the kind of man who’d not only call and verify my identity, but reaffirm an insecure high-school nerd’s ability to meaningfully contribute to the world at large.

To Ian: thank you for all that you were and did.

Login History without Manage Users permissions

Recently someone asked me how to expose the login history without giving individual users the Manage Users permission. The goal was to allow the internal support team to view the login history of portal users. Off the top of my head, I thought this would be the perfect use case for illustrating the power of the ‘without sharing’ Apex keywords. So after a few minutes we had built an Apex controller and Visualforce page we thought would work. Initial testing, as System Administrator users, was promising: loading the Visualforce page showed us the last 25 login history records for the user we specified. Unfortunately, loading that same page as a support user yielded 25 blank lines!

All was not lost, however, as we refactored the controller more in line with best practices. Moving our login history data to a wrapper class allowed the data to be visible to users who had permission to access the controller and page! Here’s the final product:




And here’s the code:

public without sharing class UserLoginHistory {

    public class loginHistoryWrapper {
        public DateTime LoginTime { get; set; }
        public String Status { get; set; }
        public String LoginURL { get; set; }
        public String LoginType { get; set; }
        public String Application { get; set; }
        public String Browser { get; set; }
        public ID UserId { get; set; }

        public loginHistoryWrapper(DateTime lt, String statuss, String LR, String ltype, String app, String brow, ID uid) {
            LoginTime = lt;
            Status = statuss;
            LoginURL = LR;
            LoginType = ltype;
            Application = app;
            Browser = brow;
            UserId = uid;
        }
    }

    private Map<String, String> UrlParameterMap;
    private User pU { get; set; }

    // Lazily load the user whose history we're displaying.
    public User u {
        get {
            if (pU == null) {
                pU = [SELECT Name FROM User WHERE Id = :UrlParameterMap.get('userId')];
            }
            return pU;
        }
    }

    public List<loginHistoryWrapper> Records { get; set; }

    public UserLoginHistory() {
        UrlParameterMap = ApexPages.currentPage().getParameters();
        // 'without sharing' lets this query run for users who lack the
        // Manage Users permission.
        List<LoginHistory> lRecords = [
            SELECT LoginTime, Status, LoginUrl, LoginType, Application, Browser, UserId
            FROM LoginHistory
            WHERE UserId = :UrlParameterMap.get('userId')
            ORDER BY LoginTime DESC
            LIMIT 25
        ];
        Records = new List<loginHistoryWrapper>();
        for (LoginHistory lh : lRecords) {
            Records.add(new loginHistoryWrapper(lh.LoginTime, lh.Status, lh.LoginUrl, lh.LoginType, lh.Application, lh.Browser, lh.UserId));
        }
    }
}


The secret sauce here is our wrapper object. Converting the loginHistory objects to our wrapper object allows the data to be seen by users without the Manage Users Permission.

<apex:page controller="UserLoginHistory">
    <apex:pageBlock title="Login History for {!u.Name}">
        <apex:pageBlockTable value="{!Records}" var="Record">
            <apex:column>
                <apex:facet name="header">User's Name</apex:facet>
                <apex:outputText value="{!u.Name}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Login Time</apex:facet>
                <apex:outputText value="{!Record.LoginTime}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Status</apex:facet>
                <apex:outputText value="{!Record.Status}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Login Type</apex:facet>
                <apex:outputText value="{!Record.LoginType}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Client Type</apex:facet>
                <apex:outputText value="{!Record.Application}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Browser</apex:facet>
                <apex:outputText value="{!Record.Browser}"/>
            </apex:column>
            <apex:column>
                <apex:facet name="header">Login URL</apex:facet>
                <apex:outputText value="{!Record.LoginURL}"/>
            </apex:column>
        </apex:pageBlockTable>
    </apex:pageBlock>
</apex:page>


Voilà! A safe way to expose login history to internal users without giving them the Manage Users permission!

The Problem

A few weeks ago my wife, a Salesforce admin, asked me if it was possible to create a related list that filters out inactive contacts. I explained that it’s entirely possible to do something similar, but only with Apex and Visualforce. It got me thinking that it’s possible to write code that, well, writes the code needed to build related lists with filters. A few weeks later, I’m releasing Custom Related Lists.

No coding required.

Custom Related Lists is an unmanaged package that enables declarative developers and admins to create related-list Visualforce pages. You can embed these Visualforce pages on standard and custom detail page layouts using the page layout editor. They mimic the functionality of native related lists, with a few additions. With Custom Related Lists, users can select and create criteria to filter the records displayed in the list. Now users can create a related list of Contacts on the Account detail page that filters out inactive contacts, or contacts of a given record type. Additionally, Custom Related Lists is not bound by the 10-field limit; it’s capable of displaying all the fields of a child object in the list, though users should be mindful of the user-experience implications that carries.

How it works.

This is the technical bit, and if you’re just interested in using Custom Related Lists, you might want to skip this section. Fundamentally, Custom Related Lists (CRL) (ab)uses Visualforce as a template system to abstract out the boilerplate bits of Apex and Visualforce. It consists of three Visualforce template files:

  • CRL_MetaGenCtrl – the Controller template
  • CRL_MetaGenPage – Visualforce page template
  • CRL_MetaGenTests – Apex tests for the controller

Each of these files uses the CRL_MetaGeneratorCtrl controller, which is responsible for providing the non-boilerplate bits of code. The new Related_List__c record page has been overridden with a nice Visualforce wizard. The user selects the “master” object whose detail pages will display this list, and the “detail” object whose records will be displayed on the list. Once those are selected, the user can choose the fields to display, as well as define criteria filters. The wizard is intentionally dynamic; selecting the “master” object automatically determines which objects relate to it and populates the detail selection list with those related objects. Once the user has specified the details, the page controller first saves the record, then generates the controller, test, and Visualforce pages needed. These are generated from the templates I mentioned above using a little-known PageReference method: getContent(). getContent() renders the given Visualforce page into a string as if it were requested by a browser, which allows us to use Visualforce as a templating system. Tied to a custom controller, the templates mentioned above are rendered in Apex through getContent(), and the result is Apex code with the user-selected options merged in. For example, to be included on a detail page, the Visualforce page that displays the list must extend the standard controller of the selected object. The controller has a method to generate the startPageTag, which is simply written to the final Visualforce page with standard merge syntax: {!startPageTag}
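The rendering step can be sketched in a few lines. This is an illustrative sketch, not the package’s exact code: the template page name comes from the list above, but the `relatedListId` parameter and variable names are assumptions.

```apex
// Render the controller template into a string of Apex source.
// Page.CRL_MetaGenCtrl is the template page; 'relatedListId' is an
// assumed parameter for passing in the configuration record.
PageReference template = Page.CRL_MetaGenCtrl;
template.getParameters().put('relatedListId', relatedList.Id);

// getContent() renders the page exactly as a browser request would,
// so the result is the template with all merge fields resolved.
String generatedApexSource = template.getContent().toString();
```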

Putting it in place

After the custom controller extension, test, and Visualforce page are rendered to strings, the application puts the code in place. The Apex code is created with the Tooling API, and the Visualforce page with a standard REST call. (I have no idea why Visualforce can be created with a REST call while Apex needs the Tooling API.) I want to give a shout-out to Andrew Fawcett and James Loghry for their excellent Tooling API Apex library. (Please note that the version of the library packaged with Custom Related Lists is truncated to include only enough of it to insert Apex classes.) Because of how Custom Related Lists uses the Tooling API, it’s important to note that generating the code works in Sandboxes and Developer orgs, but is not supported in Production orgs, as those require full test runs for deployment. The Tooling API requires a custom remote site setting to function. This is automatically generated using JSForce, a JavaScript metadata API wrapper, whenever you load the app. If auto-generation of the remote site setting fails for some reason, users are given instructions on creating it.
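As a rough sketch of the REST half (not the packaged library’s actual code; the page name, API version, and markup variable are illustrative), creating a Visualforce page looks something like this:

```apex
// Create a Visualforce page through the standard REST sObject endpoint.
// ApexPage records can be created this way; ApexClass cannot, which is
// why the Tooling API is needed for the Apex side.
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
    + '/services/data/v39.0/sobjects/ApexPage');
req.setMethod('POST');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(new Map<String, String>{
    'Name' => 'ActiveContactsForThisAccount',
    'MasterLabel' => 'ActiveContactsForThisAccount',
    'Markup' => renderedPageMarkup // the string rendered from the template
}));
HttpResponse res = new Http().send(req);
```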

Dynamicity and Data Portability

Custom Related Lists is designed so that you generate code in a Sandbox and use it in a Production environment. However, it’s also designed to be highly dynamic. It would be a pain to re-generate the code and the change set every time we wanted to adjust which fields are displayed. To prevent that need, the generated code references a Custom Related List object record on page load. This allows admins to change the criteria and displayed fields without having to re-generate and re-deploy the code. However, it also means that users would have to re-create the record in the Production org. To prevent that need, the generated code contains a JSON-encoded version of the initial Related_List__c record. After deployment to Production, on the first display of the related list, the code de-serializes the JSON and creates the needed record. I highly recommend that you leave sharing settings for Related_List__c records as public read.
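The bootstrap idea can be sketched like this. The seed JSON and the Field_List__c field are invented for illustration; the package’s actual schema differs.

```apex
// Embedded in the generated controller: a JSON seed of the original
// configuration record, inserted on first load in a new org.
private static final String SEED_JSON =
    '{"Name":"Active Contacts","Field_List__c":"FirstName,LastName,Email"}';

private Related_List__c loadConfig(String listName) {
    List<Related_List__c> existing = [
        SELECT Id, Name, Field_List__c
        FROM Related_List__c
        WHERE Name = :listName
        LIMIT 1
    ];
    if (!existing.isEmpty()) {
        return existing[0]; // record already exists in this org
    }
    // First display after deployment: recreate the record from the seed.
    Related_List__c seed = (Related_List__c)
        JSON.deserialize(SEED_JSON, Related_List__c.class);
    insert seed;
    return seed;
}
```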

Using Custom Related Lists

Installing Custom Related Lists is slightly more complex than a standard package, as it must be installed in both your Production and Sandbox orgs. Once you’ve installed it in both orgs using these links…

Once installed, the process is straightforward. First, open the Custom Related Lists app from the app selection menu. This will load the app’s info page, which will attempt to securely create a remote site setting on your behalf. If you’re an administrator of the org, this should succeed, and you’ll see a green success message.


If it fails, follow the instructions to manually create the needed remote site setting. Once your remote site setting is settled, you can navigate to the Custom Related List tab.

This is a standard list view for your Custom Related Lists objects. To facilitate ease of use, I’ve overwritten the New button to utilize a custom Visualforce page that looks like this:


Let’s go through each of the input fields here and talk about what they do.

  1. Step 1: The name you provide here is used as the name of the Visualforce page you’ll end up placing on a page layout. It’s good to be descriptive, but you only have 40 characters, so a good example would be something like “Active Contacts for this Account.” Short but descriptive.
  2. Step 2: This is where the fun starts. Whatever object you choose here determines everything else available to you. You want to choose the object on whose detail page layout you want to display this list. If you want to display, for instance, Active Contacts on this Account, you’d choose Account here. Custom Related Lists determines the options available to you in Step 3.
  3. Step 3: Choose the relationship you want to display. In the background, it asks the system to describe all objects that are related to your choice in Step 2, and shows you the Labels for those relationships. If you have a situation where you have multiple relationships, say a Master/Detail relationship and a Lookup relationship to the same object you must pay careful attention to choose the one you’re looking for. As an Administrator or Developer, you need to ensure you’re naming your relationships such that you can distinguish them! In our example of active contacts on this account, we’d choose Contact in Step 3.
  4. Step 4: Once you’ve selected the relationship to display, the app will load all of the relevant fields that are available for display. You can select fields on the left hand side and move them to the right, which will select them for display in your list. While standard related lists are limited to 10 fields, custom related lists are not.
  5. Step 5 is probably the most powerful and crucial step here. Unfortunately, it can be the most confusing. Let’s start with some terminology. Criteria are the options you’re setting to filter which records will be displayed in your list. Constraints are made up of a selected field, an ‘operand’ which defines the comparison method used, a Value field where you specify the value to be compared, and a final picklist that lets you determine if this criteria should be evaluated with AND or OR. If you’ve ever created a custom list view, this should be familiar. The wizard starts you off with a single criteria line, but you can add more by clicking on the Add New Constraint button. In our example of Active Contacts on Account you might select “Active” for the field name, set the operand picklist to ‘=’ and set the value to ‘true’ (note, no quotes are needed). This would filter your list’s contents so that it only displays Contacts whose Active__c field is equal to true. Other operands available to you include ‘!=’ or not equal, as well as <, >, <= and >=: less than, greater than, less than or equal to, and greater than or equal to, respectively. Some example criteria to get your imagination going:
    1. RecordTypeId, =, your RecordTypeId here — would result in a related list only displaying related records of a selected record type.
    2. Email, !=, null — Null is a special word, and in this case allows you to filter your list to exclude contacts with no email listed.
    3. CreatedDate, >=, 2005-06-15 — show only records that were created on or after June 15th, 2005. Note that you can use any of the Apex date literals for the value, and that you are responsible for providing the proper date format string! See here for more details!
    4. I mentioned before that you could create multiple criteria lines by clicking the Add New Constraint button. How those constraints are interpreted in relation to one another is dependent on your selection of AND or OR. For instance, you could use AND to ensure that both: RecordTypeID = your RecordTypeId, as well as lastModifiedDate >= LAST_90_DAYS criteria are met, ensuring that your list would only display records with the matching Record Type Id that were also modified in the last 90 days. If you select OR for those constraints, records with the selected Record Type Id OR who were modified in the last 90 days would be displayed.
    5. It is important to note here that you MUST have at least one constraint for Custom Related Lists to properly work. If you do not have criteria to add, use a standard related list.
  6. After clicking the Create New Custom Related List button and the page reloads, you’ll see the Generate Controller and Page button. Once you click the button, Custom Related Lists will generate the controller, the test class, and the Visualforce page.
  7. At this point, you’ve got everything you need, but only in your Sandbox org. There are three files you’ll need to include in your change set, and their filenames are listed on the final page. Deploy your change set, and then edit your page layout to include your Visualforce page and you’re set!
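Under the hood, the stored criteria from Step 5 end up as a dynamic SOQL WHERE clause. Here’s a rough sketch of what the generated controller builds for the Active Contacts example; the field list and helper variables are illustrative, not the package’s actual code.

```apex
// Build the list's query from the stored configuration.
List<String> fieldList = new List<String>{ 'FirstName', 'LastName', 'Email' };
Id recordId = ApexPages.currentPage().getParameters().get('id');

String query = 'SELECT ' + String.join(fieldList, ', ')
    + ' FROM Contact'
    + ' WHERE AccountId = :recordId' // ties the list to the detail page's record
    + ' AND Active__c = true';       // the user-defined constraint
List<Contact> rows = Database.query(query);
```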

Important Considerations and Caveats.

This package makes several assumptions, and while they’re valid in the vast majority of cases, they’re not always going to work. While the package works hard to ensure that the fields listed for inclusion in the related list are queryable and available for use, not all of them actually are. For instance, say you’re creating a related list of Notes on a given Account. The system will report that “Name” is available, but upon running it… not so much. Let’s call it a platform quirk. The good news is that most of the fields I’ve found that are affected by such quirks are not the kind you’d typically want to put in such a list. The really good news is that the related lists are fully dynamic. When the page loads, it pulls its configuration details from the Related_List__c object. If you run into an issue with a field not being available, you can edit the fields on the record and reload. No code regeneration needed.

I hope y’all find this useful, as I had a ton of fun building it. In the future I want to rework the edit screen to use the same wizard, and capture as many edge cases as possible. In the meantime, you can find the code on GitHub, and I’ll gladly accept pull requests and issues for feature requests. If Custom Related Lists makes it into your org and saves you time and energy, feel free to hit that tip button on the left.


Trailhead is far and away my favorite learning tool for Salesforce development. The combination of guided tutorials with hands-on, in-org development is unbeatable. On top of this, the guided tutorials have built-in direct feedback to help. Until recently, the state of the art for teaching programming languages was books, and the best we had for checking our work was compiler and interpreter errors. I’ve waxed philosophical about Trailhead before, but it’s unparalleled as a teaching tool. Recently a Lightning Dev Week participant asked me what my favorite Trailhead module was. After some consideration, I think my favorite module is…

Apex Testing.

I spend a lot of time on the developer forums and Salesforce Stack Exchange. I think it’s safe to say that the majority of questions are about testing. There’s the classic “will you write my tests for me?” The philosophical “why must I write unit tests?” But my favorite is still: how can I increase my test coverage for this code? Trailhead’s Apex Testing module, while it doesn’t cover everything, is a great start.

The module’s components.

The Apex Testing module has three components that build on each other. It starts with an overview of unit testing basics like assertions. This component tops off with a practical challenge — write a unit test for a given class. To pass, you have to reach 100% test coverage. That challenge reinforces several core ideas; most importantly, it enforces the practice of testing all the logical paths in your code.
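A minimal sketch of what those branch-covering tests look like — the TemperatureConverter class is the module’s running example, though I’m writing the method name from memory, so treat it as illustrative:

```apex
// One positive test per logical branch, each with an assertion.
@isTest
private class TemperatureConverterTest {
    @isTest static void testFreezingPoint() {
        Decimal celsius = TemperatureConverter.fahrenheitToCelsius(32);
        System.assertEquals(0, celsius, '32°F should convert to 0°C');
    }
    @isTest static void testBoilingPoint() {
        Decimal celsius = TemperatureConverter.fahrenheitToCelsius(212);
        System.assertEquals(100, celsius, '212°F should convert to 100°C');
    }
}
```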

Unit Testing Apex Triggers.

Because triggers execute in response to DML events, unit tests for triggers have to contain both DML statements and assertions. While orgs have to maintain 75% aggregate code coverage, every trigger has to have at least some coverage. This has the practical effect of producing many more testing questions related to triggers. Teaching trigger testing in Trailhead serves not just to train developers but to advance the community, by decreasing the number of routine “how do I write a test for this trigger?” questions on the forums.
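The pattern boils down to: perform DML to fire the trigger, then assert on the outcome. A minimal sketch, with the trigger behavior, object, and field values invented for illustration:

```apex
@isTest
private class AccountTriggerTest {
    @isTest static void testInsertSetsDefaultIndustry() {
        Account acct = new Account(Name = 'Test Co');
        Test.startTest();
        insert acct; // DML fires the trigger under test
        Test.stopTest();
        // Re-query to see what the trigger actually did.
        acct = [SELECT Industry FROM Account WHERE Id = :acct.Id];
        System.assertEquals('Other', acct.Industry,
            'Trigger should default a blank Industry to Other');
    }
}
```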

Creating Test Data.

Of all the components in the Trailhead Apex Testing module, I find the last one most valuable. In it, readers learn why it’s important to create your own test data. This is more than just a practical matter, and it’s the key to why this component is a hidden gem. Testing your code is arguably more important than actually writing it. While most of us wouldn’t neglect objects or other code dependencies, data dependencies are often overlooked. Learning to write proper tests means learning to write code that fulfills all of its dependencies. By fulfilling those dependencies and writing proper tests, developers gain the confidence that their test is valid not only today, but next week and next release!
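The module teaches this with a utility-class pattern. A condensed sketch — the class name follows Trailhead’s convention, and the objects and field values are illustrative:

```apex
// A reusable factory creates all the data a test needs, so tests never
// depend on records that happen to exist in the org.
@isTest
public class TestDataFactory {
    public static List<Account> createAccountsWithContacts(
            Integer numAccts, Integer contactsPerAcct) {
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < numAccts; i++) {
            accts.add(new Account(Name = 'TestAccount' + i));
        }
        insert accts;
        List<Contact> contacts = new List<Contact>();
        for (Account a : accts) {
            for (Integer j = 0; j < contactsPerAcct; j++) {
                contacts.add(new Contact(
                    LastName = 'TestContact' + j, AccountId = a.Id));
            }
        }
        insert contacts;
        return accts;
    }
}
```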

A good three-base hit.

Trailhead’s Apex Testing module covers three of the most important aspects of unit testing. But while Trailhead takes the time to teach the basics, I believe it misses two key facets. First, and most importantly, it doesn’t address the importance of testing with different users and scenarios. Specifically, I want Trailhead to teach developers to write tests that:

  1. Test the “expected” behavior — so-called “positive” test cases. These tests pass in expected input and test for expected output. One positive test case for each logical branch.
  2. Test expected Exceptions — so-called “negative” test cases. These tests pass invalid or otherwise faulty data into the unit of code. Negative tests assert that the code threw an exception. Bonus points for tests that assert a specific type of exception and its message.
  3. Test the code with various user roles and permissions. Can the code handle execution with a non-sysadmin?

Each of these test types safeguards against common exception cases. By testing more of these common cases, we gain more confidence in the robustness of our code. Perhaps Trailhead will expand to teach these three test types. (Dear Trailhead team, if you’d like help with that, tweet me!)
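For that third type, System.runAs is the tool. A sketch — the profile name must exist in your org, and the user field values are placeholders:

```apex
@isTest static void testAsStandardUser() {
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User'];
    User u = new User(
        Alias = 'stduser', Email = 'standard.user@example.com',
        EmailEncodingKey = 'UTF-8', LastName = 'Testing',
        LanguageLocaleKey = 'en_US', LocaleSidKey = 'en_US',
        ProfileId = p.Id, TimeZoneSidKey = 'America/Los_Angeles',
        UserName = 'std.' + DateTime.now().getTime() + '@example.com');
    System.runAs(u) {
        // Exercise the code under test here as a non-sysadmin,
        // then assert it behaves (or fails) as expected.
    }
}
```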

Secondly, I wish Trailhead also discussed HTTP callout tests. More and more of our work as Salesforce developers involves integrations, which often take the form of API integrations through HTTP callouts. Testing callouts requires knowledge of the HttpCalloutMock interface and the Test.startTest() and Test.stopTest() methods.
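A callout mock roughly looks like this; the response body is invented for illustration:

```apex
@isTest
public class ExampleCalloutMock implements HttpCalloutMock {
    public HttpResponse respond(HttpRequest req) {
        // Return a canned response instead of making a real callout.
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status":"ok"}');
        res.setStatusCode(200);
        return res;
    }
}
```

In the test itself, you’d register it with Test.setMock(HttpCalloutMock.class, new ExampleCalloutMock()) before exercising the code that makes the callout, so the platform routes the request to the mock instead of the network.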

Regardless of these two sticky bits, I think this module is the best one out there. Testing is one of those key skills that every Salesforce developer has to master. Trailhead provides not only knowledge transfer but skill practice and evaluation — a combination that can’t be beat — especially for testing.

Salesforce Communities sometimes have to be Debugged.

Recently I had the opportunity to do some communities work. On the whole, I love the communities product. From an end-user perspective it can’t be beat. As a developer, though, I’ve hit a few snags debugging Visualforce pages in communities. During normal Visualforce development, you can rely on the platform to surface errors. Usually this takes the form of an ugly error message. Ugly error messages are the epitome of terrible user experience, so it’s understandable that Salesforce would prevent them from appearing to users.

Debugging Communities with uglyErrorMessage

Behold, an ugly error message in its uninformative native habitat!

Unfortunately, in communities Salesforce replaces Visualforce errors with a different, far less informative error message. This error displays only for Visualforce errors – not Apex errors. In fact, two conditions must be met for this kind of error. First, the page’s controller or controller extension(s) must instantiate without error. Second, there must be some kind of rendering error. What do I mean by “rendering error”? Well, in my case it was a view state size error. When these types of errors happen, Salesforce hides the error behind this lovely message.

When I first encountered this bug, I reached for the usual debugging techniques. I tried a few different users. I added users to debug logs. I made sure that logs were appearing in the dev console and read every line. I saw my controller firing up and completing without a single error. Yet the error page plagued me. After some discussion with the IRC #salesforce community, I discovered a method on the PageReference class called getContent. getContent() returns the rendered content of the page, including error messages. Most importantly, getContent() returns errors rendered at the Visualforce level. This allows us to capture the error before Salesforce neatly hides it. It’s possible to construct a page that attempts to render the content of any other Visualforce page in a try/catch block. When the page catches an error, it displays it.

Debugging Communities with a Visualforce wrapper

To help others with debugging communities-based Visualforce errors, I’ve developed a reusable Visualforce page called CommunityDebugger, and its corresponding controller CommunityDebuggerCtrl. Here’s the controller code:

public with sharing class CommunityDebuggerCtrl {
    public String failingPageResponse { get; set; }
    String toLoad { get; private set; }
    Map<String, String> params { get; private set; }
    String queryString = '?';

    public CommunityDebuggerCtrl() {
        params = ApexPages.currentPage().getParameters();
        toLoad = (String) params.get('page');
        System.debug('params: ' + params);
        // Rebuild the query string so the failing page receives the same parameters.
        Boolean first = true;
        for (String key : params.keySet()) {
            if (first) {
                queryString = queryString + key + '=' + params.get(key);
                first = false;
            } else {
                queryString = queryString + '&' + key + '=' + params.get(key);
            }
        }
    }

    public void fetchFailingPage() {
        try {
            System.debug('Loading this url: ' + toLoad + queryString);
            PageReference fail = new PageReference('/' + toLoad + queryString);
            // getContent() surfaces Visualforce-level errors as exceptions.
            failingPageResponse = fail.getContent().toString();
        } catch (Exception e) {
            failingPageResponse = e.getTypeName() + ' : ' + e.getMessage()
                + ' : ' + e.getStackTraceString();
        }
    }
}

And the Visualforce page:

<apex:page controller="CommunityDebuggerCtrl" action="{!fetchFailingPage}" showHeader="false" sidebar="false">
    <apex:outputText id="failingPageResponse" escape="false" value="{!failingPageResponse}" />
</apex:page>

To use this page to debug a community page:

  1. Deploy the controller and page to your sandbox or developer org. You shouldn’t deploy this to production.

  2. Visit your silently failing Community url, say

  3. Prepend “CommunityDebugger?page=” before your failing page’s name. Like this:

  4. When the page loads, your error will be rendered for you to see, like this:

System.VisualforceException : Id value is not valid for the User standard controller : Class.CommunityDebuggerCtrl.fetchFailingPage: line 23, column 1

And that, my fellow developers, is an error message you can act on. For the record, the bug that started this all? My view state was too big. Transient FTW.