Ah, Spring is in the air. So are arms, as people new to Salesforce throw them up during Trailhead challenges where they can’t seem to get the hands-on part to pass even though they see the result they expect.
The Trailhead modules and Superbadges are so well organized and written that it may seem like there is an instructor reviewing your submissions, but that would not be practical, profitable, or in the spirit of a cloud platform. The scoring is done by automated tests that check that your work matches the instructions exactly.
The most common cause is that the learner has mistyped a provided value, usually the API name (e.g., my_variable__c). Runner-up is the experienced user who is new to Trailhead and uses their own naming conventions rather than following the instructions (been there, done that).
The third common cause is that the module content was updated but the test was not (doesn’t happen that often, but you can tell when there are a bunch of questions on the Trailhead Community about the same problem).
There are several user types, including API/integration, automated testing, and RPA accounts, that aren’t required to use MFA. We’re currently working on plans for how customers can exclude these types of users from future auto-enablement and enforcement milestones. We’ll update this FAQ and your products’ documentation when more information is available.
So, in conclusion: set the integration user up with a Salesforce Platform User License, generate a security token for that user, and then use the APIs to log in, or log in with username and password plus security token, depending on the application.
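As a rough sketch of the username-and-password option, here is the OAuth 2.0 username-password flow against the standard Salesforce token endpoint, using only the Python standard library. The connected app client ID and secret, and all of the credentials, are placeholders you would supply yourself; the one Salesforce-specific wrinkle is that the security token is appended directly to the end of the password.

```python
import json
import urllib.parse
import urllib.request

# Standard Salesforce OAuth token endpoint (use test.salesforce.com for sandboxes).
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def build_login_payload(client_id, client_secret, username, password, security_token):
    """Assemble the form body for the OAuth 2.0 username-password flow.
    Salesforce expects the security token concatenated onto the password."""
    return {
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password + security_token,
    }

def login(client_id, client_secret, username, password, security_token):
    """POST the credentials and return (access_token, instance_url) for API calls."""
    data = urllib.parse.urlencode(
        build_login_payload(client_id, client_secret, username, password, security_token)
    ).encode()
    with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
        body = json.load(resp)
    return body["access_token"], body["instance_url"]
```

Every subsequent REST call then carries the returned access token in an `Authorization: Bearer` header against the instance URL.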
For large orgs, Lightning pages are like rabbits: they multiply quickly, and most of them look the same unless you are really close. To keep the page population under control, use a unique Developer Name and provide a description in the page's properties when customizing record pages that will be assigned to anything other than Default.
As a cloud architecture consultant, I have always applauded Salesforce for requiring 75% test coverage for deployments. I just wish that it were a minimum per class rather than an average per org. Why? Because things change, and over time the average in production can drift down to the point where a new change set sitting at exactly 75% drags the org-wide average below the threshold when combined with what is already in production. Because of this, I have set the standard for my team at 95%, which always got us through this issue until recently.
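To make the averaging problem concrete, here is some back-of-the-envelope arithmetic with made-up numbers: because org-wide coverage is weighted by lines of code, a small change set with excellent coverage barely moves an org that has drifted near the threshold, which is why a high per-team standard buys you headroom.

```python
def org_coverage(components):
    """Org-wide coverage = total covered lines / total lines,
    weighted by size -- not the average of per-class percentages."""
    covered = sum(c for c, _ in components)
    total = sum(t for _, t in components)
    return 100.0 * covered / total

# Hypothetical numbers: production has drifted to 74.9% over 50,000 lines.
production = (37_450, 50_000)   # 74.9% covered

# A new 1,000-line change set held to the team's 95% standard...
change_set = (950, 1_000)       # 95.0% covered

# ...only nudges the org-wide figure just past the 75% gate.
print(round(org_coverage([production, change_set]), 2))  # 75.29
```

Had the change set been at exactly 75%, the combined figure would have stayed below the deployment threshold even though the new code technically "met" it.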
Another team had run a deployment that removed methods previous tests relied on. When my change set was validated, I was surprised to see the following error:
What was most vexing about this was that none of the classes in error had been touched by my change set. Usually this can be fixed with a recompile, but not this time. What to do? Well, someone else had gotten a deployment through (there are multiple teams working in the org), so I knew there had to be a solution. And there was! Sparing you the dozen things I tried that did not work, here is the solution: