Showing results for tags 'optimization'.

Found 12 results

  1. Bandwidth estimation (BWE) and congestion control play an important role in delivering high-quality real-time communication (RTC) across Meta's family of apps. We've adopted a machine learning (ML)-based approach that allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport. We're sharing our experiment results from this approach, some of the challenges we encountered during execution, and learnings for new adopters.

     Our existing BWE module at Meta is based on WebRTC's Google Congestion Controller (GCC). We have made several improvements through parameter tuning, but this has resulted in a more complex system, as shown in Figure 1.

     Figure 1: BWE module's system diagram for congestion control in RTC.

     One challenge with the tuned congestion control (CC)/BWE algorithm was that it had multiple parameters and actions that were dependent on network conditions. For example, there was a trade-off between quality and reliability: improving quality for high-bandwidth users often led to reliability regressions for low-bandwidth users, and vice versa, making it challenging to optimize the user experience across different network conditions. We also noticed inefficiencies in improving and maintaining the complex BWE module:

     Because realistic network conditions were absent from our experimentation process, fine-tuning the parameters for user clients required several attempts.
     Even after the rollout, it wasn't clear whether the optimized parameters were still applicable for the targeted network types.
     The result was complex code logic and branches for engineers to maintain.

     To solve these inefficiencies, we developed an ML-based, network-targeting approach that offers a cleaner alternative to hand-tuned rules. This approach also allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport.

     Network characterization

     An ML-model-based approach leverages time series data to improve bandwidth estimation through offline parameter tuning for characterized network types. For an RTC call to be completed, the endpoints must be connected to each other through network devices. The optimal configs that have been tuned offline are stored on the server and can be updated in real time. During call connection setup, these optimal configs are delivered to the client. During the call, media is transferred directly between the endpoints or through a relay server. Depending on the network signals collected during the call, the ML-based approach characterizes the network into different types and applies the optimal configs for the detected type. Figure 2 illustrates an example of an RTC call that's optimized using the ML-based approach.

     Figure 2: An example RTC call configuration with optimized parameters delivered from the server based on the current network type.

     Model learning and offline parameter tuning

     At a high level, network characterization consists of two main components, as shown in Figure 3. The first component uses offline ML-model learning to categorize the network type (for example, random packet loss versus bursty loss). The second component uses offline simulations to tune parameters optimally for the categorized network type.

     Figure 3: Offline ML-model learning and parameter tuning.

     For model learning, we leverage the time series data (network signals and non-personally identifiable information; see Figure 6, below) from production calls and simulations. Compared with the aggregate metrics logged after the call, time series data captures the time-varying nature of the network and its dynamics. We use FBLearner, our internal AI stack, for the training pipeline and deliver the PyTorch model files on demand to the clients at the start of the call. For offline tuning, we use simulations to run network profiles for the detected types and choose the optimal parameters for the modules based on improvements in technical metrics (such as quality, freezes, and so on).

     Model architecture

     From our experience, we've found that it's necessary to combine time series features with non-time series features (i.e., metrics derived from the time window) for highly accurate modeling. To handle both kinds of data, we've designed a model architecture that can process input from both sources. The time series data passes through a long short-term memory (LSTM) layer that converts the time series input into a one-dimensional vector representation, such as 16x1. The non-time series, or dense, data passes through a dense layer (i.e., a fully connected layer). The two vectors are then concatenated, to fully represent the recent network condition, and passed through another fully connected layer. The final output of the neural network model is the predicted output of the target task, as shown in Figure 4.

     Figure 4: Combined-model architecture with LSTM and dense layers.
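     To make the combined architecture concrete, here is a minimal PyTorch sketch of an LSTM-plus-dense model of the kind described above. It is an illustration based on the description in the post, not Meta's production model; the feature dimensions, hidden sizes, and the single-logit output head are assumptions.

```python
# Minimal sketch of the combined LSTM + dense architecture described above.
# Dimensions and the binary output head are illustrative assumptions.
import torch
import torch.nn as nn


class CombinedNetworkModel(nn.Module):
    def __init__(self, ts_features=8, dense_features=12, lstm_hidden=16, dense_hidden=16):
        super().__init__()
        # Time series branch: an LSTM summarizes the last N seconds of signals
        # into a one-dimensional vector (e.g., 16x1).
        self.lstm = nn.LSTM(input_size=ts_features, hidden_size=lstm_hidden, batch_first=True)
        # Dense branch: a fully connected layer for non-time-series (derived) metrics.
        self.dense = nn.Sequential(nn.Linear(dense_features, dense_hidden), nn.ReLU())
        # Head: concatenate both representations and predict the target/task,
        # here a single logit for a binary task such as RANDOM loss vs. not.
        self.head = nn.Sequential(
            nn.Linear(lstm_hidden + dense_hidden, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, ts_input, dense_input):
        # ts_input: (batch, time_steps, ts_features); dense_input: (batch, dense_features)
        _, (h_n, _) = self.lstm(ts_input)   # h_n: (1, batch, lstm_hidden)
        ts_vector = h_n[-1]                 # hidden state of the last LSTM layer
        dense_vector = self.dense(dense_input)
        combined = torch.cat([ts_vector, dense_vector], dim=1)
        return self.head(combined)          # logit; apply a sigmoid for a probability


# Example: 10 seconds of signals sampled once per second for a batch of 4 calls.
model = CombinedNetworkModel()
logits = model(torch.randn(4, 10, 8), torch.randn(4, 12))
```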
     Use case: Random packet loss classification

     Consider the use case of categorizing packet loss as either random or congestion-induced. Random loss is caused by the network components themselves, while congestion loss results from limits on queue length (and is therefore delay dependent). The ML task is defined as follows: given the network conditions in the past N seconds (N = 10), and given that the network is currently incurring packet loss, characterize the packet loss at the current timestamp as RANDOM or not. Figure 5 illustrates how we leverage the architecture to achieve that goal.

     Figure 5: Model architecture for a random packet loss classification task.

     Time series features

     We leverage time series features gathered from logs, summarized in Figure 6.

     Figure 6: Time series features used for model training.

     BWE optimization

     When the ML model detects random packet loss, we perform local optimization on the BWE module by:

     Increasing the tolerance to random packet loss in the loss-based BWE (holding the bitrate).
     Increasing the ramp-up speed on high-bandwidth links, depending on the link capacity.
     Increasing network resiliency by sending additional forward-error-correction packets to recover from packet loss.
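     The adjustments above can be thought of as selecting a different parameter set once random loss is detected. The following Python sketch illustrates that idea; the config fields, thresholds, and values are hypothetical placeholders, not Meta's actual implementation.

```python
# Illustrative only: hypothetical config fields and thresholds showing how a
# detected network type could select different BWE parameters at call time.
from dataclasses import dataclass


@dataclass
class BweConfig:
    loss_tolerance_pct: float  # packet loss ignored by the loss-based estimator
    rampup_factor: float       # multiplier on the bitrate ramp-up speed
    fec_ratio: float           # share of forward-error-correction packets added


DEFAULT = BweConfig(loss_tolerance_pct=2.0, rampup_factor=1.0, fec_ratio=0.05)
RANDOM_LOSS = BweConfig(loss_tolerance_pct=10.0, rampup_factor=1.5, fec_ratio=0.15)


def select_config(is_random_loss: bool, link_capacity_kbps: float) -> BweConfig:
    """Pick a parameter set based on the ML model's network characterization."""
    if is_random_loss and link_capacity_kbps > 1000:
        # Hold the bitrate under random loss, ramp up faster, and add FEC.
        return RANDOM_LOSS
    return DEFAULT
```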
     Network prediction

     The network characterization problem discussed in the previous sections focuses on classifying network types from past information using time series data. For those simple classification tasks, hand-tuned rules can get us there, albeit with limitations. The real power of leveraging ML for networking, however, comes from using it to predict future network conditions. We have applied ML to congestion-prediction problems to optimize the experience of low-bandwidth users.

     Congestion prediction

     From our analysis of production data, we found that low-bandwidth users often incur congestion due to the behavior of the GCC module. By predicting this congestion, we can improve the reliability of those users' experience. Toward this, we addressed the following problem statement using round-trip time (RTT) and packet loss: given the historical time series data from production or simulation ("N" seconds), predict packet loss due to congestion, or the congestion itself, in the next "N" seconds; that is, a spike in RTT followed by packet loss or further growth in RTT.

     Figure 7 shows an example from a simulation where the bandwidth alternates between 500 Kbps and 100 Kbps every 30 seconds. As we lower the bandwidth, the network incurs congestion and the ML model's predictions fire (the green spikes) even before the delay spikes and packet loss occur. This early prediction of congestion enables faster reactions and thus improves the user experience by preventing video freezes and connection drops.

     Figure 7: Simulated network scenario with alternating bandwidth for congestion prediction.

     Generating training samples

     The main challenge in modeling is generating training samples for a variety of congestion situations. With simulations, it's hard to capture the different types of congestion that real user clients encounter in production networks. So we used actual production logs to label congestion samples, applying RTT-spike criteria to the past and future windows under the following assumptions:

     Absent past RTT spikes, packet losses in the past and future are independent.
     Absent past RTT spikes, we cannot predict future RTT spikes or fractional losses (i.e., flosses).

     We split the time window into past (4 seconds) and future (4 seconds) for labeling.

     Figure 8: Labeling criteria for congestion prediction.

     Model performance

     Unlike network characterization, where ground truth is unavailable, here we can obtain ground truth by examining the future time window after it has passed and comparing it with the prediction made four seconds earlier. With this logging information gathered from real production clients, we compared the model's performance in offline training against online data from user clients.

     Figure 9: Offline versus online model performance comparison.
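     As a rough illustration of the labeling criteria, the sketch below marks a sample as congestion when an RTT spike in the 4-second past window is followed by packet loss or further RTT growth in the 4-second future window. The spike threshold, sampling rate, and data layout are assumptions made for illustration, not the production labeling code.

```python
# Hypothetical labeling sketch for congestion-prediction training samples.
# The spike threshold, sampling rate, and data layout are assumptions.
from typing import Sequence

PAST_S = 4          # past window length (seconds)
FUTURE_S = 4        # future window length (seconds)
SPIKE_FACTOR = 2.0  # RTT counts as a spike when it exceeds 2x the past median


def label_congestion(rtt_ms: Sequence[float], loss: Sequence[int], t: int, hz: int = 1) -> int:
    """Return 1 (congestion) if a past RTT spike is followed by packet loss
    or further RTT growth in the future window, else 0."""
    past = rtt_ms[t - PAST_S * hz:t]
    future_rtt = rtt_ms[t:t + FUTURE_S * hz]
    future_loss = loss[t:t + FUTURE_S * hz]

    baseline = sorted(past)[len(past) // 2]  # median RTT of the past window
    past_spike = any(r > SPIKE_FACTOR * baseline for r in past)
    if not past_spike:
        # Per the assumptions above, the future is not predictable without a past spike.
        return 0
    future_growth = max(future_rtt) > max(past)
    return 1 if any(future_loss) or future_growth else 0
```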
     Experiment results

     Here are some highlights from our deployment of various ML models to improve bandwidth estimation.

     Reliability wins for congestion prediction:
     connection_drop_rate: -0.326371 +/- 0.216084
     last_minute_quality_regression_v1: -0.421602 +/- 0.206063
     last_minute_quality_regression_v2: -0.371398 +/- 0.196064
     bad_experience_percentage: -0.230152 +/- 0.148308
     transport_not_ready_pct: -0.437294 +/- 0.400812
     peer_video_freeze_percentage: -0.749419 +/- 0.180661
     peer_video_freeze_percentage_above_500ms: -0.438967 +/- 0.212394

     Quality and user engagement wins for random packet loss characterization in high bandwidth:
     peer_video_freeze_percentage: -0.379246 +/- 0.124718
     peer_video_freeze_percentage_above_500ms: -0.541780 +/- 0.141212
     peer_neteq_plc_cng_perc: -0.242295 +/- 0.137200
     total_talk_time: 0.154204 +/- 0.148788

     Reliability and quality wins for cellular low-bandwidth classification:
     connection_drop_rate: -0.195908 +/- 0.127956
     last_minute_quality_regression_v1: -0.198618 +/- 0.124958
     last_minute_quality_regression_v2: -0.188115 +/- 0.138033
     peer_neteq_plc_cng_perc: -0.359957 +/- 0.191557
     peer_video_freeze_percentage: -0.653212 +/- 0.142822

     Reliability and quality wins for cellular high-bandwidth classification:
     avg_sender_video_encode_fps: 0.152003 +/- 0.046807
     avg_sender_video_qp: -0.228167 +/- 0.041793
     avg_video_quality_score: 0.296694 +/- 0.043079
     avg_video_sent_bitrate: 0.430266 +/- 0.092045

     Future plans for applying ML to RTC

     From our project execution and experimentation on production clients, we found that an ML-based approach is more efficient than traditional hand-tuned rules for networking in targeting, end-to-end monitoring, and updating. However, the efficiency of ML solutions largely depends on data quality and labeling (using simulations or production logs). By applying ML-based solutions to network prediction problems, congestion in particular, we fully leveraged the power of ML.

     In the future, we will consolidate all the network characterization models into a single model using a multi-task approach, to remove the inefficiency caused by redundancy in model download, inference, and so on. We will build a shared representation model for the time series to solve different tasks (e.g., bandwidth classification, packet loss classification, etc.) in network characterization. We will focus on building realistic production network scenarios for model training and validation, which will enable us to use ML to identify optimal network actions given the network conditions. And we will keep refining our learning-based methods to enhance network performance based on existing network signals.

     The post Optimizing RTC bandwidth estimation with machine learning appeared first on Engineering at Meta.

     View the full article
  2. What is On-Page Optimization?

     On-page optimization refers to the factors that affect how your website or webpage appears in natural (organic) search results. It covers a series of processes that together optimize the website's structure so that it can be found by search engines specifically and by searchers in general. Examples of on-page optimization include meta keywords, meta descriptions, title tags, keyword optimization, and risk analysis.

     On-page optimization is a crucial component of search engine optimization (SEO) that focuses on optimizing elements on your website to improve its position in the search rankings. This includes content, HTML source code, and website architecture. By optimizing these elements, websites can improve their visibility, drive more traffic, and achieve higher engagement rates.

     1. Meta keywords: Meta keywords live in the metadata of the page header and tell the search engine what the site is about.

     2. Meta description: This is the text that appears in the search results to describe the website, so it is important to write it well. Google shows only about 155 characters of it in the search results.

     3. Title tag: This describes the title of the webpage. Google shows only about 62 characters of the title tag. Search engines sometimes struggle to decide which keywords describe your website, so they use additional clues such as the page's HTML tags, meta description, and meta title. If two pages have duplicate titles or meta descriptions, search engines consider that bad form and the pages can be penalized, so writing a unique title and meta description for each page protects your pages from penalties (a minimal HTML example follows below).

     Another piece of metadata used in optimization is robots.txt. With robots.txt you can allow or disallow a search engine bot from visiting a page; when a bot requests robots.txt, it reads the file and follows its directives.

     4. Keyword optimization: One SEO principle that has changed little over the years is on-page keyword optimization. There are many ways to approach it:

     Step 1: Understand exactly what the user's intent is.
     Step 2: Select a primary keyword.
     Step 3: Select supporting keywords for each primary keyword.
     Step 4: Decide where each keyword should be placed. Use the Google AdWords Keyword Planner to narrow down your keyword list.
     Step 5: Write original, high-quality content.

     Some guidelines for writing content that is easy to optimize:

     Use snackable headlines: a snackable headline is simple, quick, and easy to understand.
     Use short sentences: a short sentence usually contains one main idea that a reader can quickly understand and remember.
     Keep paragraphs short: short paragraphs have been found to more than double the time users spend on a site, and they encourage users to continue reading.
     Break the content up: splitting content into parts lets users quickly find the value in it and understand it more easily.

     5. Risk analysis: Many websites exist with the same content, which increases the risk that yours will not be the one surfaced. To reduce that risk, use content, keywords, and descriptions containing the same words users actually search for, because the search engine tries to return the best result for the user's query.
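     As a minimal illustration of the title tag, meta description, heading, and image points above, here is a hedged HTML sketch; the page name, wording, and file paths are made up for the example.

```html
<!-- Illustrative only: example page head and body elements for on-page SEO. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <!-- Title kept under roughly 62 characters so it is not truncated in results -->
  <title>On-Page SEO Basics: Titles, Meta Descriptions, Headers</title>
  <!-- Description kept under roughly 155 characters -->
  <meta name="description"
        content="Learn how title tags, meta descriptions, and header tags help search engines understand your pages and improve rankings.">
  <!-- Per-page crawling hint; site-wide rules belong in robots.txt -->
  <meta name="robots" content="index, follow">
</head>
<body>
  <h1>On-Page SEO Basics</h1>
  <h2>Why title tags matter</h2>
  <!-- Descriptive filename and alt text help search engines understand images -->
  <img src="/images/title-tag-example.png" alt="Example of a title tag in search results">
</body>
</html>
```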
     Benefits of On-Page Optimization

     Improved Search Engine Rankings: Proper on-page SEO helps search engines understand your website and its content, which can significantly improve your site's ranking for relevant queries.
     Increased Organic Traffic: By ranking higher in search engine results, your website can attract more organic traffic, which is often more targeted and engaged.
     Enhanced User Experience: On-page SEO involves improving the website's usability and accessibility, leading to a better user experience. This includes faster load times, mobile optimization, and high-quality content.
     Higher Conversion Rates: An optimized website not only attracts more visitors but can also convert more of these visitors into customers or leads due to a better user experience and targeted content.
     Cost-Effectiveness: Unlike paid advertising, the traffic generated from on-page SEO is free, making it a cost-effective strategy in the long term.

     How to Do On-Page Optimization

     Keyword Research: Identify relevant keywords that your target audience is searching for. These keywords should be strategically incorporated into your content, titles, meta descriptions, and URLs.
     Optimize Title Tags: The title tag should be compelling and include the primary keyword towards the beginning. It should also accurately describe the page's content.
     Meta Descriptions: Write concise and engaging meta descriptions that include target keywords. Meta descriptions should provide a brief overview of the page's content.
     Header Tags: Use header tags (H1, H2, H3) to structure your content. The H1 tag should be used for the main title, with H2 and H3 tags for subsections. Include relevant keywords in these headers.
     Content Quality: Publish high-quality, original content that provides value to your audience. Incorporate keywords naturally and address user intent.
     Optimize Images: Use descriptive filenames and alt attributes for images. This helps search engines understand the images and can improve the page's ranking for relevant queries.
     URL Structure: Create clean, descriptive URLs that include keywords. Avoid using long URLs with unnecessary parameters.
     Mobile Optimization: Ensure your website is mobile-friendly, as this is a significant ranking factor for search engines.
     Site Speed: Improve your website's loading speed by optimizing images, using caching, and minimizing the use of scripts.

     Best Practices of On-Page Optimization

     Focus on User Experience: Always prioritize the user experience. This includes designing an intuitive navigation structure, using responsive design, and ensuring content is easy to read and engaging.
     Use Schema Markup: Implement schema markup to help search engines understand the context of your content. This can enhance your visibility in search results through rich snippets (see the sketch after this list).
     Internal Linking: Use internal linking wisely to help search engines discover new pages and distribute page authority throughout your site.
     Regularly Update Content: Keep your content fresh and up to date. Regularly updating your site encourages search engine crawlers to index your pages more often.
     Avoid Over-Optimization: While keywords are important, overusing them can lead to penalties. Write naturally and for your audience first, with search engines in mind.
     Secure Your Site: Implement HTTPS to ensure a secure connection. Security is a ranking signal, and HTTPS websites are preferred by search engines.
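     The schema markup mentioned above is commonly added as a JSON-LD block in the page head. The snippet below is a hedged sketch using schema.org's Article type; the property values are placeholders, not a definitive markup for any real page.

```html
<!-- Illustrative JSON-LD structured data; property values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What is On-Page Optimization and Off-Page Optimization",
  "author": { "@type": "Person", "name": "Example Author" },
  "datePublished": "2024-01-01",
  "image": "https://example.com/images/seo-article.png"
}
</script>
```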
     Implementing these strategies effectively requires a combination of technical SEO skills, content creation, and a deep understanding of your audience's needs and search behavior. On-page optimization is an ongoing process, and staying up to date with the latest SEO trends and algorithm updates is crucial for maintaining and improving search engine rankings.

     What is Off-Page Optimization?

     Off-page optimization is the part of search engine optimization (SEO) that covers everything done outside the website itself that can affect its search engine reach and results. It is a series of processes performed, directly or indirectly, on external websites with the intent of optimizing your site for search engines. In general, when a user types a search query, search engine algorithms look into their index and try to find the pages that best satisfy the user's intent. Off-site SEO is often equated with link building, but properly promoting a website involves many more methods and techniques than building links.

     Off-page optimization includes:

     Creating links on third-party websites that point back to the optimized website.
     Placing keywords, the website name, or the webpage in the anchor text of the links created.
     Building links on reliable websites (those that are established and recognized globally).
     Building links on related websites.
     Creating links on social media networks.
     Submitting the website to search engines and web directories.

     1. Blogging: Keep blogging in your SEO plans this year too. Post fresh content and new services regularly so that users keep visiting your site. It helps you build authority in your subject, and backlinks as well. Blogger, Medium, Weebly, WordPress, and HubPages are a few of the places where you can submit your blog posts.

     2. PPT or PDF submission: PPT or PDF submissions can help you reach your audience easily. Create quality, helpful content in your subject area, turn it into a PPT or PDF, and submit it to sites such as SlideShare, Scribd, 4shared, and others.

     3. Competitor analysis: Research your competitors' presence and backlinks using tools such as Ahrefs, Semrush, Majestic, and SEO SpyGlass, then try to be active in the same places and build your own backlinks there.

     4. Backlinks: Backlinks have always played a vital role in off-page SEO techniques, but link building should be done with proper research because search engines keep a close watch on backlinks. A backlink is counted when another website links to or refers to your site. Learn more on this with Linkflow.

     5. Social sharing: Social signals play an important role in SEO strategy, so maintain a presence on social networks by sharing quality content on a regular basis.

     Benefits of Off-Page Optimization

     Off-page optimization refers to all the measures taken outside of the actual website to improve its position in search rankings. These efforts are primarily focused on building the site's reputation and authority through links, social media, and other external means. The benefits of off-page optimization include:

     Increased Rankings: The website will rank higher in the SERPs (search engine results pages), which leads to more traffic.
     Increased PageRank: PageRank is a numeric score that represents the importance of a website in the eyes of Google. Off-page SEO helps increase this score.
     Establishing Authority: By securing backlinks from reputable sites in your industry, your site is seen as more authoritative and trustworthy.
     Greater Exposure: Higher rankings also mean greater exposure, because when a website ranks in the top positions it gets more links, more visits, and more social media mentions.
     Cost-Effectiveness: Off-page SEO is incredibly cost-effective, especially when compared to paid advertising channels.

     How to Do Off-Page Optimization

     Link Building: The most common off-page SEO method is creating backlinks (links from other websites to yours). The quality, relevance, and number of these backlinks influence your site's ranking.
     Social Media Engagement: Being active on social media platforms can help make your business more visible and generate more backlinks.
     Guest Blogging: Writing articles for other relevant blogs in your industry can get your site's link out there.
     Influencer Outreach: Reaching out to influencers in your industry to promote your content can also result in external links.
     Content Marketing: Shareable content naturally acquires backlinks and is a critical part of off-page optimization.
     Forums and Community Participation: Being active in online communities related to your industry can help you gain backlinks and traffic.
     Local SEO: Ensure your business is listed in local directories and Google My Business.

     Best Practices of Off-Page Optimization

     Prioritize Quality Over Quantity of Links: It's better to have a few high-quality links than a multitude of low-quality ones. Google evaluates the authority and relevance of your link sources.
     Diversify Your Link Sources: Getting links from a wide range of sites is more beneficial than having multiple links from a single domain.
     Focus on Relevant Links: Links from sites within your industry or niche are more effective than links from unrelated sites.
     Monitor Your Backlink Profile: Use tools to monitor your backlinks, to ensure they are high quality and to disavow any toxic links that could harm your ranking.
     Engage With Your Community: Engage genuinely on social media, in forums, and in your local community to build relationships that can translate into off-page SEO benefits.
     Avoid Black Hat Techniques: Techniques such as buying links, link exchange schemes, and cloaking can result in penalties from search engines.
     Create Share-Worthy Content: Content that provides value is more likely to be shared and linked to, driving organic off-page SEO efforts.

     By adhering to these best practices, you can effectively execute an off-page optimization strategy that complements your on-page efforts, ultimately improving your website's visibility and authority online.

     The post What is On-Page Optimization and Off-page Optimization appeared first on DevOpsSchool.com.

     View the full article
  3. Data management changes when migrating to the cloud, but there are strategies for technical leaders who want to direct more efficient DataOps. View the full article
  4. Predictive Optimization intelligently optimizes your Lakehouse table data layouts for peak performance and cost-efficiency - without you needing to lift a finger. View the full article
  5. AWS Step Functions announces an Optimized Integration for Amazon EMR Serverless, adding support for the Run a Job (.sync) integration pattern with 6 EMR Serverless API actions (CreateApplication, StartApplication, StopApplication, DeleteApplication, StartJobRun, and CancelJobRun). View the full article
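     As a hedged sketch of what the Run a Job (.sync) pattern can look like in a state machine definition, the Amazon States Language snippet below starts an EMR Serverless job and waits for it to complete. The application ID, role ARN, and script location are placeholders, and parameter names should be checked against the Step Functions documentation for this integration.

```json
{
  "StartEmrServerlessJob": {
    "Type": "Task",
    "Resource": "arn:aws:states:::emr-serverless:startJobRun.sync",
    "Parameters": {
      "ApplicationId": "<your-application-id>",
      "ExecutionRoleArn": "<your-job-execution-role-arn>",
      "JobDriver": {
        "SparkSubmit": {
          "EntryPoint": "s3://<your-bucket>/scripts/job.py"
        }
      }
    },
    "End": true
  }
}
```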
  6. Cloud-native application development in AWS often requires a complex, layered architecture with synchronous and asynchronous interactions between multiple components, e.g., API Gateway, microservices, serverless functions, and system-of-record integrations. Performance engineering requires analyzing the performance and resiliency of each component and of the interactions between them. While guidance is available on the technical implementation of individual components, e.g., AWS API Gateway, Lambda functions, etc., meeting the performance requirements at the overall architecture level still demands understanding and applying end-to-end best practices. This article attempts to provide some fine-grained mechanisms to improve the performance of a complex cloud-native architecture flow, curated from on-the-ground experience and lessons learned from real projects deployed in production. Mission-critical applications often have stringent nonfunctional requirements for concurrency, expressed as transactions per second (henceforth called "tps"). A proven mechanism to validate the concurrency requirement is to conduct performance testing. View the full article
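     As a rough illustration of validating a tps target, the Python sketch below fires concurrent requests at an endpoint and reports the achieved throughput; the URL, concurrency, and request count are placeholders, and a real performance test would use a dedicated load-testing tool with realistic traffic patterns.

```python
# Rough, illustrative throughput check; placeholders only, not a full load test.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/api/health"  # placeholder endpoint
TOTAL_REQUESTS = 200                     # placeholder volume
CONCURRENCY = 20                         # placeholder parallelism


def call_once(_):
    # One synchronous request; returns the HTTP status code.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        return resp.status


start = time.monotonic()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(call_once, range(TOTAL_REQUESTS)))
elapsed = time.monotonic() - start

ok = sum(1 for s in statuses if s == 200)
print(f"achieved ~{TOTAL_REQUESTS / elapsed:.1f} tps, {ok}/{TOTAL_REQUESTS} successful")
```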
  7. Amazon OpenSearch Service now provides new Auto-Tune metrics and improved Auto-Tune events that give you better visibility into the cluster performance optimizations made by Auto-Tune. View the full article
  8. AWS Compute Optimizer now supports IOPS and throughput-based EBS volume recommendations. View the full article
  9. Taking a holistic approach to cloud optimization and application modernization can help keep cloud spend in check. There's a fine line between cloud spend and cloud sprawl. Most companies today are using cloud technologies to power their most important products and services, communications and collaboration, but it's easy to cross the line from spending smartly […] The post How to Optimize Your Cloud Operations appeared first on DevOps.com. View the full article
  10. FreeRTOS version 202011.00 is now available with refactored IoT and AWS libraries: coreMQTT, coreJSON, corePKCS11, and AWS IoT Device Shadow, in addition to the FreeRTOS kernel and FreeRTOS+TCP library. These refactored libraries have been optimized for modularity and memory usage for constrained microcontrollers, and have undergone code quality checks (e.g. MISRA-C compliance, Coverity static analysis), and memory safety validation with the C Bounded Model Checker (CBMC) automated reasoning tool. For more details on these libraries and other features of this release, see the 202011.00 release blog on FreeRTOS.org. View the full article
  11. Today, AWS Marketplace announced the ability for independent software vendors (ISVs) to provide tags corresponding to the metered usage of their software. Customers can enable the vendor provided tags as cost allocation tags to gain visibility into third-party software spend. View the full article
  12. Vercel, during its online Next.js Conf event, announced an update to its open source React-based framework that adds support for automatic image optimization along with access to continuous Web Vitals analytics. Company CEO Guillermo Rauch said Next.js 10 also adds support for internationalized routing and automatic language detection and quick-start e-commerce capabilities. Pioneered by […] The post Vercel Optimizes Apps Based on React Framework appeared first on DevOps.com. View the full article