Artisan vs Optimizable Targetings
Demand-Side Platform (DSP) and ad exchange targetings can get really complex really fast, especially once your DSP or ad exchange starts implementing machine learning features like CTR optimization and CPC bidding strategies. But when you’re just starting out, it all looks very simple.
The Artisan Approach to Targeting
You add geo-targeting. Some domain targetings. Some app targetings. Device type (phone/tablet/CTV), OS, browser, OS version. Device brand and make.
But then it gets a bit more involved: custom targeting for battery level, connection type, what have you.
Creatives need targetings too. VPAID video creatives need protocol support and supported media types checked, bitrates validated, and video length verified. Display creatives are even more involved: Do we need MRAID support? Do we match on exact dimensions, or will larger ones work just as well for us?
The same thing applies to DSP-wide filtering of traffic—all those dozens of blocklists: domains, apps, CIDRs and individual IPs, user agents of crawlers, and publisher IDs.
And whether you’re starting from scratch or have inherited a well-established bidder, it’s very likely that each targeting is carefully coded by hand, all the way from the UI frontend and backend through config updates and ingestion to the implementation in the bidder.
I call it an artisan approach—carefully handcrafted parts intricately woven together to deliver a luxury experience. This is one way to go. It works.
Targetings and Time
But with time, what tends to happen is that the number of campaigns grows fast: at first it’s hundreds, and after just a few short years it’s already thousands. The number of targetings grows too. They start to take more time in the flame graph or in pprof’s CPU top. And you notice that while the structure of targetings stays quite stable, with each new SSP they accumulate an enormous amount of business logic about the variations between SSPs.
Usually, they are evaluated one campaign at a time, going through all of the targetings, all the way down. The main optimization is to put the “cheap” targetings, or the ones that filter out most of the campaigns, at the top of the funnel (think device type, video vs. display, in-app vs. web, mobile vs. desktop, and so on).
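To make the artisan flavor concrete, here is a minimal sketch of what such a hand-coded funnel might look like in Go; every type and field name here is illustrative, not taken from any real bidder:

package artisan

// BidRequest is a tiny illustrative subset of an OpenRTB bid request.
type BidRequest struct {
	IsVideo    bool
	DeviceType int    // OpenRTB device.devicetype
	OS         string // device.os
	Country    string // device.geo.country
	Domain     string // site.domain or app.bundle
}

// Campaign carries hand-coded targeting data; every new targeting means a new
// field here plus UI, config, ingestion, and bidder changes.
type Campaign struct {
	ID          string
	VideoOnly   bool
	DeviceTypes map[int]bool    // empty means "any"
	AllowedOS   map[string]bool // empty means "any"
	DomainAllow map[string]bool // empty means "any"
	BlockedGeos map[string]bool // negative targeting on country
}

// Matches is the hand-written funnel: cheapest / most selective checks first,
// and any failed check rejects the campaign for this request.
func (c *Campaign) Matches(r *BidRequest) bool {
	if c.VideoOnly && !r.IsVideo {
		return false
	}
	if len(c.DeviceTypes) > 0 && !c.DeviceTypes[r.DeviceType] {
		return false
	}
	if len(c.AllowedOS) > 0 && !c.AllowedOS[r.OS] {
		return false
	}
	if c.BlockedGeos[r.Country] {
		return false
	}
	if len(c.DomainAllow) > 0 && !c.DomainAllow[r.Domain] {
		return false
	}
	return true
}

// Eligible goes one campaign at a time through all of its targetings.
func Eligible(campaigns []*Campaign, r *BidRequest) []*Campaign {
	var out []*Campaign
	for _, c := range campaigns {
		if c.Matches(r) {
			out = append(out, c)
		}
	}
	return out
}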
Patterns in Targetings
So what is an alternative?
Well, it’s easy to start noticing that targetings have a very regular structure, which stems from the fact that we match values at different paths in an OpenRTB object against the campaign’s requirements.
Most targetings are about checking inclusion of a particular field in a set. Think domain allowlist of a campaign or a set of device types (phones, tablets) that the campaign is targeting.
Sometimes it’s a “negative targeting,” which is about checking if a field does not match.
And most fields are ints, strings, or lists of strings and ints. Rarely are they booleans or floats.
I’m pretty sure you can see where I’m going with this.
The Optimizable Approach to Targeting
Yes. Autogenerate all the targetings. They are going to be simple. But it’s possible to get them for literally every field.
There are nuances, as usual. Geo is usually about a hierarchy. There’s London, Ontario, and London, UK. These are two very different Londons. There’s Vancouver, BC, and Vancouver, WA. These things are important.
And if your DSP implements geofencing/supergeo targeting based on user coordinates, that will inevitably be complex.
So while the autogenerated targetings work fine, you will have to accommodate custom ones. Which would be wise anyway, given that pacing, budget capping, and predictive targetings will have to be part of a campaign’s filtering funnel.
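As one example of a custom targeting that does not reduce to a field-in-set match, a coordinate-based geofence could be sketched roughly like this (circle fences plus a haversine distance check; the names are made up for illustration):

package targeting

import "math"

// Circle is one geofence: a center point and a radius in kilometers.
type Circle struct {
	Lat, Lon float64 // degrees
	RadiusKM float64
}

// GeoFence is a custom targeting that cannot be autogenerated from a field
// path: it matches when device.geo coordinates fall inside any of the circles.
type GeoFence struct {
	Fences []Circle
}

// haversineKM is the great-circle distance between two points in kilometers.
func haversineKM(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKM = 6371.0
	rad := func(deg float64) float64 { return deg * math.Pi / 180 }
	dLat, dLon := rad(lat2-lat1), rad(lon2-lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(rad(lat1))*math.Cos(rad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKM * math.Asin(math.Sqrt(a))
}

// Match reports whether the user’s coordinates are inside any configured fence.
func (g *GeoFence) Match(lat, lon float64) bool {
	for _, f := range g.Fences {
		if haversineKM(lat, lon, f.Lat, f.Lon) <= f.RadiusKM {
			return true
		}
	}
	return false
}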
Simplifying the Filtering Funnel
Usually, the filtering funnel is very simple, just a logical AND between conditions:
targeting1 AND targeting2 AND targeting3...
That’s simple enough. Sometimes you can throw in an OR.
And strictly speaking, if you go for autogenerated targetings with well-defined interfaces, implementing arbitrary logical expressions becomes possible.
In practice, a simple chain of targetings joined with “AND” works just fine. Besides, arbitrary formulas are hard to represent in the UI, or rather, it’s hard to build a good UX around them.
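For what it’s worth, a minimal sketch of such a well-defined interface and a plain AND chain might look like this (the names are assumptions, not a reference implementation):

package targeting

// BidRequest is a tiny illustrative subset of an OpenRTB bid request.
type BidRequest struct {
	DeviceType int
	OS         string
	City       string
	Region     string
	Country    string
}

// Condition is the single interface every autogenerated or custom targeting
// implements; a campaign’s filtering funnel is then just a chain of conditions.
type Condition interface {
	Match(req *BidRequest) bool
}

// StringIn is the shape most autogenerated targetings take: extract a field,
// check membership in a set, optionally negated for "negative" targetings.
type StringIn struct {
	Get     func(r *BidRequest) string
	Allowed map[string]bool
	Negate  bool
}

func (s *StringIn) Match(req *BidRequest) bool {
	ok := s.Allowed[s.Get(req)]
	if s.Negate {
		return !ok
	}
	return ok
}

// And rejects as soon as any condition in the chain rejects.
type And struct {
	Conditions []Condition
}

func (a *And) Match(req *BidRequest) bool {
	for _, c := range a.Conditions {
		if !c.Match(req) {
			return false
		}
	}
	return true
}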
Exploring Logical Expressions in Targetings
On the other hand, that opens interesting possibilities:
For the geo matching above, we can actually represent targeting as:
(device.geo.city == 'London' AND device.geo.region == 'Ontario' AND device.geo.country == 'Canada') OR (...)
And of course, the representation of targetings doesn’t need to look like that—at all.
{
  "targeting": {
    "Or": {
      "conditions": [
        {
          "And": {
            "conditions": [
              {
                "StringListMatch": {
                  "path": ".device.geo.city",
                  "match": ["London"]
                }
              },
              {
                "StringListMatch": {
                  "path": ".device.geo.region",
                  "match": ["Ontario"]
                }
              },
              {
                "StringListMatch": {
                  "path": ".device.geo.country",
                  "match": ["Canada"]
                }
              }
            ]
          }
        }
      ]
    }
  }
}
But this particular approach, with arbitrary targeting formulas, can be unnecessarily complicated and leads to optimizing whole trees of logical expressions. I think things can be optimized in simpler ways if we stick to a classical targeting filtering funnel: a single AND over a long list of matching predicates.
{
  "targeting": {
    "And": {
      "conditions": [
        {
          "IntListMatch": {
            "path": ".device.devicetype",
            "match": [1, 4, 5]
          }
        },
        {
          "IntListMatch": {
            "path": ".at",
            "match": [2]
          }
        },
        {
          "StringListMatch": {
            "path": ".device.os",
            "match": ["iOS"]
          }
        }
      ]
    }
  }
}
Sorry for the verbose listing, but you can see that this can be very easily deserialized from JSON and evaluated as fast as the “artisan” targetings (well, as long as we don’t go fancy with arbitrary logical formulas).
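As a rough sketch of that deserialization and evaluation (the type names and the dotted-path convention are assumptions, and the bid request is treated as a generic decoded JSON map):

package targeting

import (
	"encoding/json"
	"strings"
)

// Node mirrors the JSON shape above: exactly one of the fields is expected to be set.
type Node struct {
	And             *BoolNode      `json:"And,omitempty"`
	Or              *BoolNode      `json:"Or,omitempty"`
	StringListMatch *StringMatcher `json:"StringListMatch,omitempty"`
	IntListMatch    *IntMatcher    `json:"IntListMatch,omitempty"`
}

type BoolNode struct {
	Conditions []Node `json:"conditions"`
}

type StringMatcher struct {
	Path  string   `json:"path"`
	Match []string `json:"match"`
}

type IntMatcher struct {
	Path  string  `json:"path"`
	Match []int64 `json:"match"`
}

// ParseTargeting decodes a {"targeting": {...}} document like the listings above.
func ParseTargeting(data []byte) (*Node, error) {
	var cfg struct {
		Targeting Node `json:"targeting"`
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg.Targeting, nil
}

// Eval walks a bid request decoded into a generic map and checks set
// membership; And/Or recurse over their conditions.
func (n *Node) Eval(req map[string]interface{}) bool {
	switch {
	case n.And != nil:
		for _, c := range n.And.Conditions {
			if !c.Eval(req) {
				return false
			}
		}
		return true
	case n.Or != nil:
		for _, c := range n.Or.Conditions {
			if c.Eval(req) {
				return true
			}
		}
	case n.StringListMatch != nil:
		v, _ := lookup(req, n.StringListMatch.Path).(string)
		for _, m := range n.StringListMatch.Match {
			if v == m {
				return true
			}
		}
	case n.IntListMatch != nil:
		// encoding/json decodes numbers into float64 in a generic map.
		if v, ok := lookup(req, n.IntListMatch.Path).(float64); ok {
			for _, m := range n.IntListMatch.Match {
				if int64(v) == m {
					return true
				}
			}
		}
	}
	return false
}

// lookup resolves a dotted path like ".device.geo.country" in a decoded request.
func lookup(m map[string]interface{}, path string) interface{} {
	var cur interface{} = m
	for _, part := range strings.Split(strings.TrimPrefix(path, "."), ".") {
		obj, ok := cur.(map[string]interface{})
		if !ok {
			return nil
		}
		cur = obj[part]
	}
	return cur
}

With something like this, the second listing plus a bid request run through json.Unmarshal into a map evaluate just like a flat AND funnel, one predicate at a time.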
Optimizing Targetings
I call this approach “optimizable” rather than autogenerated because this kind of regular structure lends itself naturally to optimization.
As an example, in a logical AND at runtime we can, with some sampling, measure how long each condition takes to evaluate and how frequently it rejects the request.
And we can order the conditions in the logical AND expression by their rejection efficiency, P(reject) / CPU time, in descending order, to get the cheapest and most frequent rejects to the top of the filtering funnel.
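A sketch of what that self-tuning AND chain could look like, assuming a made-up Predicate type and sampling rate, and ignoring thread safety (a real bidder would want atomics or per-shard stats):

package targeting

import (
	"math/rand"
	"sort"
	"time"
)

// Predicate is one targeting check over a decoded bid request.
type Predicate func(req map[string]interface{}) bool

// condStats tracks sampled statistics for a single predicate in the AND chain.
type condStats struct {
	pred    Predicate
	rejects int64 // sampled rejections
	nanos   int64 // total sampled evaluation time in nanoseconds
}

// score is P(reject) divided by mean CPU time per evaluation; the evaluation
// count cancels out, leaving rejections per sampled nanosecond. Higher means
// the predicate should run earlier.
func (c *condStats) score() float64 {
	if c.nanos == 0 {
		return 0
	}
	return float64(c.rejects) / float64(c.nanos)
}

// AdaptiveAnd is an AND chain that samples a small fraction of evaluations to
// learn how often and how cheaply each predicate rejects.
type AdaptiveAnd struct {
	conds      []*condStats
	sampleRate float64 // e.g. 0.01
}

func (a *AdaptiveAnd) Match(req map[string]interface{}) bool {
	for _, c := range a.conds {
		if rand.Float64() < a.sampleRate {
			start := time.Now()
			ok := c.pred(req)
			c.nanos += time.Since(start).Nanoseconds()
			if !ok {
				c.rejects++
				return false
			}
		} else if !c.pred(req) {
			return false
		}
	}
	return true
}

// Reorder sorts the chain so the cheapest, most frequent rejects come first;
// call it periodically, outside the hot path.
func (a *AdaptiveAnd) Reorder() {
	sort.Slice(a.conds, func(i, j int) bool {
		return a.conds[i].score() > a.conds[j].score()
	})
}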
Individual targetings can be optimized as well in an automated manner, especially those with low cardinality. And geo might deserve special treatment, after all.
And this is what I want to talk about next time.