4 Backend Practices Ensuring Adenty’s Resilience Under High Loads
Performance issues may go unnoticed on small workloads but reveal themselves as the workload grows, degrading the user experience and causing lags across the entire tech ecosystem. What specific approaches can fortify web performance in a high-load environment driven by heavy traffic and complex MarTech integrations?
In this article, we’ll look at four aspects of Adenty’s database and code organization that keep its performance stable under high loads.
Manual model writing instead of automapping
While automapping simplifies development, it causes additional memory and computational overhead. This makes it suitable for small databases and prototypes but inefficient for large databases with complex schemas and high query loads. Reasons for AutoMapper’s underperformance include:
- Reflection Usage: Reflection can be 5–20 times slower than direct property access or assignment in manually written code. Even in the newest versions with optimized code structures such as Expression Trees, AutoMapper performs slower than direct code.
- Excessive Memory Load: AutoMapper can lead to the creation of intermediary objects and increase the load on the garbage collector. Moreover, incorrect configuration may result in data leaks and security risks.
- Postponed Execution and Hidden Costs: When mapping is combined with an ORM (Object-Relational Mapping) layer, issues such as deferred LINQ query execution may arise. Another potential issue is the unexpected generation of queries, which can lead to performance bottlenecks and increased resource costs.
- Lack of Control Over Code Optimization: AutoMapper doesn’t provide enough opportunities for fine-tuning critical code areas or for effective performance profiling.
As Adenty’s database is vast and growing, we’ve replaced automapping with manual model writing. This approach offers higher performance through precise and optimized database navigation, provides transparent control over operations, and enables safe database scaling.
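As a rough illustration of the difference, here is a minimal sketch of a hand-written mapping; the Visitor and VisitorDto types below are hypothetical stand-ins, not Adenty’s actual models:

```csharp
using System;

// Hypothetical entity and DTO used only for illustration;
// Adenty's real models are larger and schema-specific.
public sealed class Visitor
{
    public Guid Id { get; init; }
    public string? FirstName { get; init; }
    public string? LastName { get; init; }
    public DateTime LastSeenUtc { get; init; }
}

public sealed class VisitorDto
{
    public Guid Id { get; init; }
    public string FullName { get; init; } = "";
    public DateTime LastSeenUtc { get; init; }
}

public static class VisitorMapper
{
    // Direct property assignment: no reflection, no intermediary objects,
    // and the compiler flags any schema change that breaks the mapping.
    public static VisitorDto ToDto(Visitor entity) => new()
    {
        Id = entity.Id,
        FullName = $"{entity.FirstName} {entity.LastName}".Trim(),
        LastSeenUtc = entity.LastSeenUtc,
    };
}
```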
Implementing manual model writing reduced query time by 10–15%, ensuring near-instant load times for end users. It also improves code maintainability, which simplifies scaling and feature updates.
Combining Newtonsoft.Json and .NET System.Text.Json libraries
Benchmarks comparing the efficiency of Newtonsoft.Json and .NET System.Text.Json under high loads show that the latter performs better. Why, then, did we choose a hybrid approach?
The key factor is the convenience of working with dynamic objects using JObject, JArray, and JToken. The .NET System.Text.Json library lacks this capability, even though it’s essential for handling flexible data structures with minimal overhead. By combining both libraries, we achieved faster loading times, as System.Text.Json enables quick serialization and deserialization of API responses, reducing memory usage and response time.
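The sketch below illustrates how such a split of responsibilities can look in practice; the PlanResponse type and the JSON path are hypothetical examples, not Adenty’s actual code:

```csharp
using System.Text.Json;
using Newtonsoft.Json.Linq;

public record PlanResponse(string PlanId, string[] Features);

public static class JsonUsage
{
    // Hot path: strongly typed API responses go through System.Text.Json,
    // which keeps allocations and serialization time low.
    public static string SerializeResponse(PlanResponse response) =>
        JsonSerializer.Serialize(response);

    // Flexible path: MarTech payloads with an unknown shape are easier to
    // inspect and reshape through Newtonsoft.Json's JObject/JToken API.
    public static string? ReadNestedValue(string payload, string jsonPath) =>
        JObject.Parse(payload).SelectToken(jsonPath)?.ToString();
}
```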
Audit of database indexes and stored procedures
Database indexes are data structures that speed up data retrieval operations. Just as a table of contents guides a reader through a book, indexes provide a shortcut to the relevant database location, removing the need to scan an entire table.
We regularly audit database indexes to optimize query performance and eliminate unnecessary overhead. Without regular checkups, redundant and unused indexes accumulate over time, increasing storage consumption and costs. Additionally, some indexes may be poorly designed or missing, which slows down query processing. To avoid this, we regularly update, deduplicate, and remove unnecessary indexes to maintain query performance at scale.
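As an illustration of what such a checkup can look like (assuming SQL Server; the connection string is a placeholder and this is not Adenty’s actual audit tooling), the index usage statistics view shows which indexes are written to but rarely read:

```csharp
using System;
using Microsoft.Data.SqlClient;

// Lists non-clustered indexes together with their read and write counters;
// indexes with many writes (user_updates) but almost no reads
// (user_seeks/user_scans/user_lookups) are candidates for removal.
const string sql = """
    SELECT OBJECT_NAME(i.object_id) AS TableName, i.name AS IndexName,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
        ON s.object_id = i.object_id
       AND s.index_id = i.index_id
       AND s.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
    ORDER BY ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0);
    """;

using var connection = new SqlConnection("<connection-string>");
connection.Open();
using var command = new SqlCommand(sql, connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(
        $"{reader["TableName"]}.{reader["IndexName"]}: " +
        $"reads={reader["user_seeks"]}/{reader["user_scans"]}/{reader["user_lookups"]}, " +
        $"writes={reader["user_updates"]}");
}
```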
We employ a similar approach for stored procedures – precompiled SQL code snippets that execute directly on the database server, reducing network load. As the database schema, query logic, and workloads evolve, some stored procedures may become irrelevant. Reviewing them helps to optimize memory consumption, CPU usage, and input/output operations.
Caching to reduce API query load
Depending on the client’s selected plan, Adenty provides access to a certain set of features (e.g., server-side cookies, identity, etc.). Without caching, the client API must query Adenty’s API for validation each time it accesses a feature. While this approach may perform well for smaller workloads, it significantly slows execution under high loads.
To prevent this, we cache clients’ plans and refresh the cache every two hours to reflect plan changes and expirations. This approach reduces database strain, handles growing query loads efficiently, and accelerates load times for a smoother user experience.
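A minimal sketch of this pattern with .NET’s in-memory cache is shown below; the IPlanApi client and the cache key format are hypothetical, while the two-hour window mirrors the refresh interval described above:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical client for the plan-validation endpoint.
public interface IPlanApi
{
    Task<string[]> GetPlanFeaturesAsync(Guid clientId);
}

public sealed class CachedPlanProvider
{
    private readonly IMemoryCache _cache;
    private readonly IPlanApi _api;

    public CachedPlanProvider(IMemoryCache cache, IPlanApi api)
    {
        _cache = cache;
        _api = api;
    }

    public Task<string[]?> GetFeaturesAsync(Guid clientId) =>
        _cache.GetOrCreateAsync($"plan:{clientId}", entry =>
        {
            // Entries expire after two hours, so plan changes and
            // expirations are picked up on the next request.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(2);
            return _api.GetPlanFeaturesAsync(clientId);
        });
}
```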
Summing Up
Maintaining performance under high loads has its own specifics. The approaches we employ may yield no visible improvement on smaller workloads, yet they significantly increase performance under high and peak loads. We’ve achieved a 15–20% increase in database throughput and a 10–15% improvement in response time, ensuring consistently fast performance regardless of workload and MarTech integration complexity.
Book an interactive demo to try Adenty’s capabilities in anonymity-resistant visitor tracking, resilient data storage, and MarTech integration.