Showing results for tags 'testing'.


  1. Copado's genAI tool automates testing in Salesforce software-as-a-service (SaaS) application environments. View the full article
  2. Tricentis this week added a generative artificial intelligence (AI) capability, dubbed Tricentis Copilot, to its application testing automation platform to reduce the amount of code that DevOps teams need to create manually. It is based on an instance of the large language model (LLM) created by OpenAI and deployed on the Microsoft Azure cloud. […] View the full article
  3. Web application security testing aims to detect, prevent, and address security vulnerabilities within web applications. Flaws in web application coding accounted for 72% of the identified vulnerabilities. This evaluation involves scrutinizing the code, architecture, and deployment environment to assess the security posture of the applications. Security testing for web applications can be executed manually or […] The post What is Web Application Security Testing? appeared first on Kratikal Blogs. View the full article
  4. This blog series was written jointly with Amine Besson, Principal Cyber Engineer, Behemoth CyberDefence, and one more anonymous collaborator. Testing the pens... In this blog (#8 in the series), we will take a fairly shallow look at testing in detection engineering (a deep look would probably require a book). Detection Engineering is Painful — and It Shouldn’t Be (Part 1) Detection Engineering and SOC Scalability Challenges (Part 2) Build for Detection Engineering, and Alerting Will Improve (Part 3) Focus Threat Intel Capabilities at Detection Engineering (Part 4) Frameworks for DE-Friendly CTI (Part 5) Cooking Intelligent Detections from Threat Intelligence (Part 6) Blueprint for Threat Intel to Detection Flow (Part 7)

If we do detection as code, and we engineer detections, where is the QA? Where is the testing? Depending on who you ask, you may receive a very different opinion of what detection testing is. Three key buckets of approaches stand out:

Unit Testing: Boring, Eh? This approach is where we apply a simple, regular stimulus (a command line execution, a configuration change, some network activity, etc.) to check that detections still trigger. This is typically oriented more toward ensuring that the log ingestion pipeline, alerts, SIEM data model, etc. are all integrated as expected (a secret: often they are not!). This basic approach reduces the chance of a bad surprise down the line, but it does not ensure that your detections are effective or “good.” This “micro testing” simply confirms that the detection rule follows the developer’s intent (e.g., this should trigger on Windows event 7946 with these parameters, and it does do that in our environment today). Today, you say? What about tomorrow, or the day after? Indeed, this type of testing benefits the most from repetition and automation. If you run a SIEM or even an EDR, you know your “friends” in IT often change things and — oh horror! — don’t tell you.
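The “micro testing” idea above can be sketched in a few lines. This is a hypothetical, SIEM-agnostic illustration: the rule logic, the event ID, and the field names are invented for the example, not any product’s schema.

```python
# Hypothetical sketch of "micro testing" a detection: the rule logic, the event
# ID, and the field names are invented for illustration, not any SIEM's API.

def detects_suspicious_service_install(event: dict) -> bool:
    """Toy rule: flag a (hypothetical) service-install event running from a temp path."""
    return (
        event.get("event_id") == 7045
        and "\\temp\\" in event.get("image_path", "").lower()
    )

def test_rule_still_triggers() -> None:
    # The same synthetic stimulus, replayed on every run -- the "continuous" part.
    malicious = {"event_id": 7045, "image_path": "C:\\Temp\\evil.exe"}
    benign = {"event_id": 7045, "image_path": "C:\\Program Files\\svc.exe"}
    assert detects_suspicious_service_install(malicious), "detection drifted!"
    assert not detects_suspicious_service_install(benign), "rule too noisy!"

test_rule_still_triggers()
print("unit test passed")
```

Run on a schedule, a test like this catches the “IT changed something and didn’t tell us” failures the day they happen, not the day an incident does.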
So, if you commit to the idea of unit testing detections, you commit to that testing being continuous. Another dimension here: reality or simulation? Do you trigger the activity (very obviously impossible or insanely risky in some cases) or do you inject the logs into your SIEM? The answer is of course “yes.” Don’t assign an Okta admin role to a random user to see if your SIEM catches it… duh. The choices really are:

Trigger the activity in production (if safe)
Trigger the activity in a test environment that’s shipping logs to your SIEM for ingestion
Inject logs collected after such activity into a SIEM intake (naturally, this won’t test log ingestion into your SIEM from production systems, only what happens after)

Naturally, you need a workable way to separate test detections and test detection data from the real ones, especially for longer-term analytics and reporting (this can easily be filed under “easier said than done” in some organizations, but at least don’t open tickets based on test alerts, will ya?). For example, this tool on GitHub provides the capability to test a YARA-L rule (used by Chronicle SIEM) by running it over logs that have been ingested and returning any matches (successful detections) for the detection engineer to analyze (see this presentation and this blog for additional details).

Adversary Emulation Emulation is where intel is parsed from the red teamer’s perspective to execute a similar attack in a test environment, which allows the DE to correctly understand detection opportunities. This process empowers detection engineers (DEs) to grasp detection possibilities at a deeper level, generating valuable datasets for the construction of highly tailored, environment-specific detections. It allows us to build a data set that can then be further analyzed to build detections in the manner most adequate for the environment. This is a threat-oriented strategy that allows detections to be built with more certainty.
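Before going deeper into emulation, the test-data separation point above deserves a sketch: one common pattern is to stamp injected logs with a test label and route any resulting alerts away from the SOC triage queue. All field and queue names here are invented, not a specific SIEM schema.

```python
# Sketch of separating injected test telemetry from real events so test alerts
# never open tickets. Field names ("labels", "detection_test") and queue names
# are assumptions for illustration only.

TEST_LABEL = "detection_test"

def inject_test_log(event: dict, run_id: str) -> dict:
    """Stamp a replayed log with a test marker before sending it to the SIEM intake."""
    tagged = dict(event)
    tagged.setdefault("labels", {})[TEST_LABEL] = run_id
    return tagged

def route_alert(alert: dict) -> str:
    """Send test-generated alerts to the DE team's results store, not the SOC queue."""
    if TEST_LABEL in alert.get("labels", {}):
        return "detection-test-results"
    return "soc-triage-queue"

alert = inject_test_log({"event": "okta.admin_role_granted"}, run_id="run-42")
print(route_alert(alert))  # the tagged test alert bypasses the SOC queue
```

The same label also makes it trivial to exclude test events from longer-term detection analytics and reporting.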
The focus shifts from asking “will this isolated string in a packet trigger an alert?” to the more comprehensive question, “will this adversarial activity, or variations of it, be flagged within my unique network or system?” For example, adversary emulation can involve mimicking the tactics, techniques, and procedures (TTPs) of a notorious ransomware group. This might include emulating initial access methods, lateral movement, data exfiltration techniques, and the final encryption stage. Similarly, an adversary emulation exercise might replicate the steps a state-sponsored threat actor employs to infiltrate a software vendor, taint a product update, and then pivot from the compromised update to target the vendor’s customers. What shows up in logs? What rules can be written? Breach and Attack Simulation (BAS) tooling provides some value here [or MSV, which works well, yet refuses to admit that it is a BAS :)]

Red to Purple Teaming Finally, there is purple teaming (OK, one definition of it, at least), where a red team periodically executes attack scenarios and reviews the outcomes in collaboration with the SOC (including whatever relevant D&R and DE teams): what went well (i.e., triggered detections) and what didn’t (lack of detection, too many alerts, or detections that didn’t trigger as expected). This result-oriented approach is the most effective at checking whether the SOC as a whole functions efficiently, providing clear pointers for improvement to DE teams, provided the attack scenario is meaningful. It sounds like a bit of a letdown, but the detailed analysis of this goes far outside the scope of our little detection engineering series.

All three types of threat detection testing are important for ensuring that a detection system is effective, comprehensive, and “future-proof” to the extent possible. By regularly testing the detection system, organizations can identify and fix weaknesses and ensure that they are protected from the latest threats.
Testing to Retesting Testing is great, but things change. If what you log changes, your detections may stop working or start working in a set of new, unusual ways :-). We cover this in the unit testing section, but — with changes — it applies to all testing. This is IT, so everything decays, drifts, rots, and gets unmoored from its bindings. Rules that were robust start to produce ‘false alarms’ for ‘mysterious reasons’; things that triggered 1/month start triggering 3/minute… Thus, a reminder: all testing needs to be continuous (well, periodic, really), and if you plan to test, you need to plan to retest.

Scope This probably should be first, but hey… I am getting to this, ain’t I? What detections to test? Some assume that testing is part of detection engineering, but no! Testing is part of detection, period. You want to test vendor detections, customized/tuned detections, and of course those you wrote before your morning coffee. I suspect there may be exceptions to this (will some vendor offer guaranteed-to-work, curated detections? Probably, yes!). The DE team should handle “Unit Testing” as part of their lifecycle, and at least plan for it in their roadmap. Next, the “Adversary Emulation” approach allows DEs to better craft detections — it requires strong intel and tight collaboration between the red team (or equivalent), Detection Engineering, and likely the Threat Intel team, if any. Should capacity be available, it is strongly recommended to structure processes to support this function, initiated, led, and guardrailed by the TI analyst who produced the knowledge item. At any rate, any detection testing data is useful for TI purposes: it allows you to clearly report on the detective posture of the organization in the face of threats, not only those in the wild but also brand-new ones. Detection posture measurement is key to improving maturity in a threat-intel-driven manner, as prioritization becomes more fluid and meaningful.
Now that we have a stronger understanding of plugging TI processes into DE, one last piece of the puzzle is missing: from a strategy standpoint, how do we start steering the SOC into a new threat-driven, detection-content-focused unit? How do we start building project management processes? This is coming next! Previous blog posts of this series: Detection Engineering is Painful — and It Shouldn’t Be (Part 1) Detection Engineering and SOC Scalability Challenges (Part 2) Build for Detection Engineering, and Alerting Will Improve (Part 3) Focus Threat Intel Capabilities at Detection Engineering (Part 4) Frameworks for DE-Friendly CTI (Part 5) Cooking Intelligent Detections from Threat Intelligence (Part 6) Blueprint for Threat Intel to Detection Flow (Part 7) Testing in Detection Engineering (Part 8) was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story. The post Testing in Detection Engineering (Part 8) appeared first on Security Boulevard. View the full article
  5. Graphic created by Kevon Mayers

Introduction Organizations often use Terraform modules to orchestrate complex resource provisioning and provide a simple interface for developers to enter the required parameters to deploy the desired infrastructure. Modules enable code reuse and provide a method for organizations to standardize the deployment of common workloads such as a three-tier web application, a cloud networking environment, or a data analytics pipeline. When building Terraform modules, it is common for the module author to start with manual testing. Manual testing is performed using commands such as terraform validate for syntax validation, terraform plan to preview the execution plan, and terraform apply followed by manual inspection of the resource configuration in the AWS Management Console. Manual testing is prone to human error, is not scalable, and can result in unintended issues. Because modules are used by multiple teams in the organization, it is important to ensure that any changes to the modules are extensively tested before release. In this blog post, we will show you how to validate Terraform modules and how to automate the process using a Continuous Integration/Continuous Deployment (CI/CD) pipeline.

Terraform Test Terraform test is a new testing framework for module authors to perform unit and integration tests for Terraform modules. Terraform test can create infrastructure as declared in the module, run validations against the infrastructure, and destroy the test resources regardless of whether the test passes or fails. Terraform test will also provide warnings if there are any resources that cannot be destroyed. Terraform test uses the same HashiCorp Configuration Language (HCL) syntax used to write Terraform modules, which reduces the burden on module authors to learn other tools or programming languages. Module authors run the tests using the terraform test command, which is available in Terraform CLI version 1.6 or higher.
Module authors create test files with the extension *.tftest.hcl. These test files are placed in the root of the Terraform module or in a dedicated tests directory. The following elements are typically present in a Terraform tests file:

Provider block: optional, used to override the provider configuration, such as selecting the AWS region where the tests run.
Variables block: the input variables passed into the module during the test, used to supply non-default values or to override default values for variables.
Run block: used to run a specific test scenario. There can be multiple run blocks per test file; Terraform executes run blocks in order. In each run block you specify the Terraform command (plan or apply) and the test assertions. Module authors can specify conditions such as: length(var.items) != 0. A full list of condition expressions can be found in the HashiCorp documentation.

Terraform tests are performed in sequential order, and at the end of the Terraform test execution any failed assertions are displayed.

Basic test to validate resource creation Now that we understand the basic anatomy of a Terraform tests file, let’s create basic tests to validate the functionality of the following Terraform configuration, which creates an AWS CodeCommit repository with the prefix repo-.

# main.tf
variable "repository_name" {
  type = string
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
}

Now we create a Terraform test file in the tests directory. See the following directory structure as an example:

├── main.tf
└── tests
    └── basic.tftest.hcl

For this first test, we will not perform any assertion except for validating that the Terraform execution plan runs successfully. In the tests file, we create a variables block to set the value for the variable repository_name.
We also add a run block with command = plan to instruct Terraform test to run terraform plan. The completed test should look like the following:

# basic.tftest.hcl
variables {
  repository_name = "MyRepo"
}

run "test_resource_creation" {
  command = plan
}

Now we will run this test locally. First, ensure that you are authenticated into an AWS account, and run the terraform init command in the root directory of the Terraform module. After the provider is initialized, start the test using the terraform test command. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass Our first test is complete; we have validated that the Terraform configuration is valid and the resource can be provisioned successfully. Next, let’s learn how to perform inspection of the resource state.

Create resource and validate resource name Re-using the previous test file, we add an assert block to check whether the CodeCommit repository name starts with the string repo-, and provide an error message if the condition fails. For the assertion, we use the startswith function. See the following example:

# basic.tftest.hcl
variables {
  repository_name = "MyRepo"
}

run "test_resource_creation" {
  command = plan

  assert {
    condition     = startswith(aws_codecommit_repository.test.repository_name, "repo-")
    error_message = "CodeCommit repository name ${var.repository_name} did not start with the expected value of ‘repo-****’."
  }
}

Now, let’s assume that another module author made changes to the module by modifying the prefix from repo- to my-repo-. Here is the modified Terraform module:

# main.tf
variable "repository_name" {
  type = string
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("my-repo-%s", var.repository_name)
  description     = "Test repository."
}

We can catch this mistake by running the terraform test command again. ❯ terraform test tests/basic.tftest.hcl...
in progress run "test_resource_creation"... fail ╷ │ Error: Test assertion failed │ │ on tests/basic.tftest.hcl line 9, in run "test_resource_creation": │ 9: condition = startswith(aws_codecommit_repository.test.repository_name, "repo-") │ ├──────────────── │ │ aws_codecommit_repository.test.repository_name is "my-repo-MyRepo" │ │ CodeCommit repository name MyRepo did not start with the expected value 'repo-***'. ╵ tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... fail Failure! 0 passed, 1 failed.

We have successfully created a unit test using assertions that validates that the resource name matches the expected value. For more examples of using assertions, see the Terraform Tests Docs. Before we proceed to the next section, don’t forget to fix the repository name in the module (revert it back to repo- instead of my-repo-) and re-run your Terraform test.

Testing variable input validation When developing Terraform modules, it is common to use variable validation as a contract test to validate any dependencies / restrictions. For example, AWS CodeCommit limits the repository name to 100 characters. A module author can use the length function to check the length of the input variable value. We are going to use Terraform test to ensure that the variable validation works effectively. First, we modify the module to use variable validation:

# main.tf
variable "repository_name" {
  type = string

  validation {
    condition     = length(var.repository_name) <= 100
    error_message = "The repository name must be less than or equal to 100 characters."
  }
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
}

By default, when variable validation fails during the execution of a Terraform test, the Terraform test also fails. To simulate this, create a new test file and set the repository_name variable to a value longer than 100 characters.
# var_validation.tftest.hcl
variables {
  repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy"
}

run "test_invalid_var" {
  command = plan
}

Notice that in this new test file we also set the command to plan. Why is that? Because variable validation runs prior to terraform apply, so we can save time and cost by skipping the resource provisioning entirely. If we run this Terraform test, it will fail as expected. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... fail ╷ │ Error: Invalid value for variable │ │ on main.tf line 1: │ 1: variable "repository_name" { │ ├──────────────── │ │ var.repository_name is "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy" │ │ The repository name must be less than or equal to 100 characters. │ │ This was checked by the validation rule at main.tf:3,3-13. ╵ tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... fail Failure! 1 passed, 1 failed.

For other module authors who might iterate on the module, we need to ensure that the validation condition is correct and will catch any problems with input values. In other words, we expect the validation condition to fail with the wrong input. This is especially important when we want to incorporate the contract test in a CI/CD pipeline. To prevent our test from failing due to this intentionally invalid input, we can use the expect_failures attribute.
Here is the modified test file:

# var_validation.tftest.hcl
variables {
  repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy"
}

run "test_invalid_var" {
  command = plan

  expect_failures = [
    var.repository_name
  ]
}

Now if we run the Terraform test, we will get a successful result. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... pass tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... pass Success! 2 passed, 0 failed.

As you can see, the expect_failures attribute is used to test negative paths (inputs that would cause failures when passed into a module). Assertions tend to focus on positive paths (the ideal inputs). For an additional example of a test that validates the functionality of a completed module with multiple interconnected resources, see this example in the Terraform CI/CD and Testing on AWS Workshop.

Orchestrating supporting resources In practice, end users utilize Terraform modules in conjunction with other supporting resources. For example, a CodeCommit repository is usually encrypted using an AWS Key Management Service (KMS) key. The KMS key is provided by end users to the module using a variable called kms_key_id. To simulate this test, we need to orchestrate the creation of the KMS key outside of the module. In this section we will learn how to do that. First, update the Terraform module to add the optional variable for the KMS key:

# main.tf
variable "repository_name" {
  type = string

  validation {
    condition     = length(var.repository_name) <= 100
    error_message = "The repository name must be less than or equal to 100 characters."
  }
}

variable "kms_key_id" {
  type    = string
  default = ""
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
  kms_key_id      = var.kms_key_id != "" ? var.kms_key_id : null
}

In a Terraform test, you can instruct the run block to execute another helper module. The helper module is used by the test to create the supporting resources. We will create a sub-directory called setup under the tests directory with a single kms.tf file. We also create a new test file for the KMS scenario. See the updated directory structure:

├── main.tf
└── tests
    ├── setup
    │   └── kms.tf
    ├── basic.tftest.hcl
    ├── var_validation.tftest.hcl
    └── with_kms.tftest.hcl

The kms.tf file is a helper module that creates a KMS key and provides its ARN as an output value:

# kms.tf
resource "aws_kms_key" "test" {
  description             = "test KMS key for CodeCommit repo"
  deletion_window_in_days = 7
}

output "kms_key_id" {
  value = aws_kms_key.test.arn
}

The new test will use two separate run blocks. The first run block (setup) executes the helper module to generate a KMS key; this is done by assigning command = apply, which will run terraform apply to generate the KMS key. The second run block (codecommit_with_kms) will then use the KMS key ARN output of the first run as the input variable passed to the main module:

# with_kms.tftest.hcl
run "setup" {
  command = apply

  module {
    source = "./tests/setup"
  }
}

run "codecommit_with_kms" {
  command = apply

  variables {
    repository_name = "MyRepo"
    kms_key_id      = run.setup.kms_key_id
  }

  assert {
    condition     = aws_codecommit_repository.test.kms_key_id != null
    error_message = "KMS key ID attribute value is null"
  }
}

Go ahead and run terraform init, followed by terraform test. You should get a successful result like the one below. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl...
pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... pass tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... pass tests/with_kms.tftest.hcl... in progress run "setup"... pass run "codecommit_with_kms"... pass tests/with_kms.tftest.hcl... tearing down tests/with_kms.tftest.hcl... pass Success! 4 passed, 0 failed.

We have learned how to run Terraform test and develop various test scenarios. In the next section, we will see how to incorporate all the tests into a CI/CD pipeline.

Terraform Tests in CI/CD Pipelines Now that we have seen how Terraform test works locally, let’s see how it can be leveraged to create a Terraform module validation pipeline on AWS. The following AWS services are used:

AWS CodeCommit – a secure, highly scalable, fully managed source control service that hosts private Git repositories.
AWS CodeBuild – a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.
AWS CodePipeline – a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
Amazon Simple Storage Service (Amazon S3) – an object storage service offering industry-leading scalability, data availability, security, and performance.

Terraform module validation pipeline In the above architecture for a Terraform module validation pipeline, the following takes place:

A developer pushes Terraform module configuration files to a git repository (AWS CodeCommit).
AWS CodePipeline begins running the pipeline. The pipeline clones the git repo and stores the artifacts in an Amazon S3 bucket.
An AWS CodeBuild project configures a compute/build environment with Checkov installed from an image fetched from Docker Hub.
CodePipeline passes the artifacts (the Terraform module) and CodeBuild executes Checkov to run static analysis of the Terraform configuration files.
Another CodeBuild project is configured with Terraform from an image fetched from Docker Hub. CodePipeline passes the artifacts (the repo contents) and CodeBuild runs the Terraform commands to execute the tests.

CodeBuild uses a buildspec file to declare the build commands and relevant settings. Here is an example of the buildspec files for both CodeBuild projects:

# Checkov
version: 0.1
phases:
  pre_build:
    commands:
      - echo pre_build starting
  build:
    commands:
      - echo build starting
      - echo starting checkov
      - ls
      - checkov -d .
      - echo saving checkov output
      - checkov -s -d ./ > checkov.result.txt

In the above buildspec, Checkov is run against the root directory of the cloned CodeCommit repository. This directory contains the configuration files for the Terraform module. Checkov also saves the output to a file named checkov.result.txt for further review or handling if needed. If Checkov fails, the pipeline will fail.

# Terraform Test
version: 0.1
phases:
  pre_build:
    commands:
      - terraform init
      - terraform validate
  build:
    commands:
      - terraform test

In the above buildspec, the terraform init and terraform validate commands are used to initialize Terraform and check that the configuration is valid. Finally, the terraform test command is used to run the configured tests. If any of the Terraform tests fail, the pipeline will fail. For a full example of the CI/CD pipeline configuration, please refer to the Terraform CI/CD and Testing on AWS workshop. The module validation pipeline described above is meant as a starting point. In a production environment, you might want to customize it further by adding Checkov allow-list rules, linting, checks for Terraform docs, or prerequisites such as building the code used in AWS Lambda.
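To make the pipeline’s go/no-go step concrete: terraform test prints a summary line such as “Success! 2 passed, 0 failed.”, and a CI wrapper can parse it when the exit code alone isn’t enough (for example, to report counts back to the pipeline). This is a hedged sketch, not part of the AWS workshop; in a real build you would feed it the captured output of the terraform test command.

```python
# Hedged sketch: parse the summary line `terraform test` prints (as shown in
# the runs above) and turn it into a go/no-go decision for a pipeline stage.
import re

def parse_test_summary(output: str) -> tuple[int, int]:
    """Return (passed, failed) from terraform test output; raise if no summary found."""
    m = re.search(r"(\d+) passed, (\d+) failed", output)
    if not m:
        raise ValueError("no terraform test summary found")
    return int(m.group(1)), int(m.group(2))

def gate(output: str) -> bool:
    """Go/no-go: require at least one test to have run and none to have failed."""
    passed, failed = parse_test_summary(output)
    return failed == 0 and passed > 0

print(gate("Success! 2 passed, 0 failed."))   # True
print(gate("Failure! 1 passed, 1 failed."))   # False
```

The `passed > 0` check guards against the silent-green case where a misconfigured stage runs zero tests and still exits cleanly.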
Choosing various testing strategies At this point you may be wondering when you should use Terraform tests versus other tools such as preconditions and postconditions, check blocks, or policy as code. The answer depends on your test type and use cases. Terraform test is suitable for unit tests, such as validating that resources are created according to the naming specification. Variable validations and pre/post conditions are useful for contract tests of Terraform modules, for example by providing an error when input variable values do not meet the specification. As shown in the previous section, you can also use Terraform test to ensure your contract tests are working properly. Terraform test is also suitable for integration tests where you need to create supporting resources to properly test the module functionality. Lastly, check blocks are suitable for end-to-end tests where you want to validate the infrastructure state after all resources are generated, for example to test whether a website is running after an S3 bucket configured for static web hosting is created.

When developing Terraform modules, you can run Terraform test in command = plan mode for unit and contract tests. This allows the unit and contract tests to run quicker and cheaper, since no resources are created. You should also consider the time and cost of executing Terraform test for complex / large Terraform configurations, especially if you have multiple test scenarios. Terraform test maintains one or more state files in memory for each test file; consider how to re-use the module’s state when appropriate. Terraform test also provides test mocking, which allows you to test your module without creating the real infrastructure.

Conclusion In this post, you learned how to use Terraform test and develop various test scenarios. You also learned how to incorporate Terraform test in a CI/CD pipeline. Lastly, we discussed various testing strategies for Terraform configurations and modules.
For more information about Terraform test, we recommend the Terraform test documentation and tutorial. To get hands-on practice building a Terraform module validation pipeline and a Terraform deployment pipeline, check out the Terraform CI/CD and Testing on AWS Workshop.

Authors Kevon Mayers Kevon Mayers is a Solutions Architect at AWS. Kevon is a Terraform Contributor and has led multiple Terraform initiatives within AWS. Prior to joining AWS, he worked as a DevOps Engineer and Developer, and before that with the GRAMMYs/The Recording Academy as a Studio Manager, Music Producer, and Audio Engineer. He also owns a professional production company, MM Productions.

Welly Siauw Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his days working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blog posts and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with espresso machines and hiking outdoors. View the full article
  6. With “digital” now a key part of every aspect of business activity, the ability to deliver innovation at speed is do or die. In a highly competitive marketplace with growing expectations of customer experience, the timely release of top-notch digital products and services is no longer a bonus; it is fundamental to customer acquisition and retention. The smallest of delays might be the opportunity a competitor needs to gain market share and become the new disruptor. This makes it strategically important to empower software teams to deliver flawless digital experiences at speed and scale. To achieve this, organizations need feedback to determine whether the latest development will function as intended, break core functionality, or, even worse, fail in a real-world setting. Software testing, once perceived as an impediment to speed and innovation, has now become a critical component of development activity. This represents a huge perspective shift from the classic, waterfall approach to software development, in which testing was often left until the end. Now, the move to DevOps and Agile methodologies means testing is essential for accelerating the delivery of innovative applications without incurring unacceptable business risk. But Agile and DevOps initiatives are also pushing traditional testing methods to their breaking point. Automation and testing throughout the development lifecycle are the new must-haves.

Facing the challenges of continuous delivery Organizations are releasing much more frequently, even as often as multiple times per hour. Testers are expected to test each user story as soon as it’s implemented, even when that functionality interacts with other functionality that is evolving in parallel. Testing is also expected to alert the team when changes unintentionally impact legacy functionality that was implemented, tested, and validated in previous weeks, months, or years.
As organizations increasingly edge towards continuous delivery with automated delivery pipelines, go/no-go decisions will ultimately hinge upon test results. Test automation is required, but it’s not sufficient. Even with the most comprehensive automation, there is not enough time to test everything. When organizations adopt modern architectures and delivery methods, even teams experiencing clear test automation success will face challenges. For example, realistic tests cannot be created or executed fast enough or frequently enough. Teams are overwhelmed by a seemingly never-ending stream of false positives and incomplete tests, not to mention all the test maintenance required to address them. And finally, they can’t confidently tell business leaders whether a release candidate is fit to be released.

It’s impossible to test every possible path through a modern business application every time an organization prepares for a new release. However, if the testing approach is reconsidered, it’s possible to achieve a thorough assessment of business risk with much less testing than most companies are doing today.

Bringing continuous testing into play

This is where continuous testing comes into play: providing the right feedback to the right stakeholder at the right time. For decades, testing was deferred until the end of the cycle, but by that point it was too late: there was little the team could feasibly do to address tester feedback except delay the release. With continuous testing, the focus is on accelerating innovation by providing actionable feedback at the point at which it’s possible to act on it. This helps development teams identify and fix issues as efficiently as possible, and it accelerates delivery by helping business leaders determine when it’s reasonably safe to release. This is achieved by mastering, and going beyond, test automation. 
It requires aligning testing with business risks, ensuring that testing effectively assesses the end-user experience, and providing the instant quality feedback required at different stages of the delivery pipeline. Organisations should therefore implement a structured testing approach for new functionality. Development and testing teams must know which environments have new code deployed, when new pieces of software need to be tested, what kind of coverage various tests provide, and what additional testing is needed.

But test case portfolios typically suffer from a common problem: a large volume of tests with very little coverage of the application. This results in duplicate test cases and more maintenance to cover the ever-growing suite of tests, an impractical and cost-intensive process. By adopting risk-based test optimization, unneeded tests can be removed while risk coverage is dramatically enhanced. It’s also possible to see which business-critical functionality has passed or failed, allowing organizations to make faster, more informed business decisions on releases. This results in maximum coverage of the highest-impact business functionality, with the least effort.

At its core, continuous testing is about evaluating the effect of change: using risk-based criteria to prioritize testing and evaluating business risk when determining when to ship. The approach must therefore also be driven by change, not only within technology, but in people and processes too. Organizations must consider what they can do from all perspectives to facilitate the transition from their present state to a continuous testing process that accomplishes the organization's quality, speed, and efficiency goals. By adopting continuous testing, teams can surface significant issues early in each cycle, cutting time-to-release by more than half and dramatically reducing production errors in the process. 
Better still, testing costs can often be tangibly reduced and measured, a significant ROI given the current economic environment.

A strategic imperative to digital success

The bottom line is that testing is now a strategic imperative for digital success. Continuous testing helps test smarter for rapid insight into what matters most to the business. It repositions testing from a bottleneck to a trusted coach that helps push limits and surge ahead of the competition. Along with a better product, it’s possible to uncover development areas where significant productivity gains and financial savings can be made. Without it, you might just find that your competitors have gained the upper hand.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  7. Artificial intelligence (AI) and machine learning (ML) can play a transformative role across the software development lifecycle, with a special focus on enhancing continuous testing (CT). CT is especially critical in the context of continuous integration/continuous deployment (CI/CD) pipelines, where the need for speed and efficiency must be balanced with the demands for quality and […] View the full article
  8. A gentle introduction to unit testing, mocking and patching for beginners Continue reading on Towards Data Science » View the full article
  9. The DataFrame equality test functions were introduced in Apache Spark™ 3.5 and Databricks Runtime 14.2 to simplify PySpark unit testing. The full set o... View the full article
  10. Developers need feedback on their work so that they know whether their code is helping the business. They should have “multiple feedback loops to ensure that high-quality software gets delivered to users”[1]. Development teams also need to review their feedback loops so that they can maximize useful feedback. Testing provides such feedback, and it should run continuously throughout the development process so that developers receive it quickly. There are many different types of testing. View the full article
  11. How do you know if you can run terraform apply to your infrastructure without negatively affecting critical business applications? You can run terraform validate and terraform plan to check your configuration, but will that be enough? Whether you’ve updated some HashiCorp Terraform configuration or a new version of a module, you want to catch errors quickly before you apply any changes to production infrastructure. In this post, I’ll discuss some testing strategies for HashiCorp Terraform configuration and modules so that you can run terraform apply with greater confidence.

As a HashiCorp Developer Advocate, I’ve compiled some advice to help Terraform users learn how infrastructure tests fit into their organization’s development practices, the differences in testing modules versus configuration, and approaches to manage the cost of testing. I included a few testing examples with Terraform’s native testing framework. No matter which tool you use, you can generalize the approaches outlined in this post to your overall infrastructure testing strategy. In addition to the testing tools and approaches in this post, you can find other perspectives and examples in the references at the end.

The testing pyramid

In theory, you might decide to align your infrastructure testing strategy with the test pyramid, which groups tests by type, scope, and granularity. The testing pyramid suggests that engineers write fewer tests in the categories at the top of the pyramid, and more tests in the categories at the bottom. Higher-level tests in the pyramid take more time to run and cost more due to the higher number of resources you have to configure and create. In reality, your tests may not perfectly align with the pyramid shape. The pyramid offers a common framework to describe what scope a test can cover to verify configuration and infrastructure resources. I’ll start at the bottom of the pyramid with unit tests and work my way up to end-to-end tests. 
Manual testing involves spot-checking infrastructure for functionality and can have a high cost in time and effort.

Linting and formatting

While not on the test pyramid, you often encounter tests that verify the hygiene of your Terraform configuration. Use terraform fmt -check and terraform validate to check the formatting and validate the correctness of your Terraform configuration. When you collaborate on Terraform, you may consider testing the Terraform configuration against a set of standards and best practices. Build or use a linting tool to analyze your Terraform configuration for specific best practices and patterns. For example, a linter can verify that your teammate defines a Terraform variable for an instance type instead of hard-coding the value.

Unit tests

At the bottom of the pyramid, unit tests verify individual resources and configurations for expected values. They should answer the question, “Does my configuration or plan contain the correct metadata?” Traditionally, unit tests run independently, without external resources or API calls. For additional test coverage, you can use any programming language or testing tool to parse the Terraform configuration in HashiCorp Configuration Language (HCL) or JSON and check for statically defined parameters, such as provider attributes with defaults or hard-coded values. However, none of these tests verify correct variable interpolation, list iteration, or other configuration logic. As a result, I usually write additional unit tests to parse the plan representation instead of the Terraform configuration.

Configuration parsing does not require active infrastructure resources or authentication to an infrastructure provider. However, unit tests against a Terraform plan require Terraform to authenticate to your infrastructure provider and make comparisons. These types of tests overlap with security testing done via policy as code because you check attributes in Terraform configuration for the correct values. 
For example, your Terraform module parses the IP address from an AWS instance’s DNS name and outputs a list of IP addresses to a local file. At a glance, you don’t know if it correctly replaces the hyphens and retrieves the IP address information.

variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}

variable "service_kind" {
  type        = string
  description = "Service kind to search"
}

locals {
  ip_addresses = toset([
    for service, service_data in var.services :
    replace(replace(split(".", service_data.node)[0], "ip-", ""), "-", ".")
    if service_data.kind == var.service_kind
  ])
}

resource "local_file" "ip_addresses" {
  content  = jsonencode(local.ip_addresses)
  filename = "./${var.service_kind}.hcl"
}

You could pass an example set of services and run terraform plan to manually check that your module retrieves only the TCP services and outputs their IP addresses. However, as you or your team adds to this module, you may break the module’s ability to retrieve the correct services and IP addresses. Writing unit tests ensures that the logic of searching for services based on kind and retrieving their IP addresses remains functional throughout a module’s lifecycle. This example uses two sets of unit tests written in terraform test to check the logic generating the service’s IP addresses for each service kind. 
The first set of tests verifies that the file contents will have two IP addresses for TCP services, while the second set checks that the file contents will have one IP address for the HTTP service:

variables {
  services = {
    "service_0" = {
      kind = "tcp"
      node = "ip-10-0-0-0"
    },
    "service_1" = {
      kind = "http"
      node = "ip-10-0-0-1"
    },
    "service_2" = {
      kind = "tcp"
      node = "ip-10-0-0-2"
    },
  }
}

run "get_tcp_services" {
  variables {
    service_kind = "tcp"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.0", "10.0.0.2"]
    error_message = "Parsed `tcp` services should return 2 IP addresses, 10.0.0.0 and 10.0.0.2"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./tcp.hcl"
    error_message = "Filename should include service kind `tcp`"
  }
}

run "get_http_services" {
  variables {
    service_kind = "http"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.1"]
    error_message = "Parsed `http` services should return 1 IP address, 10.0.0.1"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./http.hcl"
    error_message = "Filename should include service kind `http`"
  }
}

I set some mock values for a set of services in the services variable. The tests include command = plan to check attributes in the Terraform plan without applying any changes. As a result, the unit tests do not create the local file defined in the module. The example demonstrates positive testing, where I test that valid input works as expected. Terraform’s testing framework also supports negative testing, where you might expect a validation to fail for an incorrect input. Use the expect_failures attribute to capture the error. If you do not want to use the native testing framework in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice to parse the plan representation in JSON and verify your Terraform logic. 
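As a sketch of such a negative test, assume a validation rule is added to the module's service_kind variable; the validation block below is an illustrative assumption, not part of the original example:

```hcl
# Hypothetical addition to the module: reject an empty service kind.
variable "service_kind" {
  type        = string
  description = "Service kind to search"

  validation {
    condition     = length(var.service_kind) > 0
    error_message = "service_kind must not be empty."
  }
}

# Negative unit test: the run passes only if this variable's
# validation fails for the empty input.
run "reject_empty_service_kind" {
  variables {
    service_kind = ""
  }

  command = plan

  expect_failures = [
    var.service_kind,
  ]
}
```

With expect_failures, terraform test reports the run as passed when the listed check raises an error, and as failed if the invalid input slips through.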
Besides testing attributes in the Terraform plan, unit tests can validate:

- Number of resources or attributes generated by for_each or count
- Values generated by for expressions
- Values generated by built-in functions
- Dependencies between modules
- Values associated with interpolated values
- Expected variables or outputs marked as sensitive

If you wish to unit test infrastructure by simulating a terraform apply without creating resources, you can choose to use mocks. Some cloud service providers offer community tools that mock the APIs for their service offerings. Beware that not all mocks accurately reflect the behavior and configuration of their target API. Overall, unit tests run very quickly and provide rapid feedback. As an author of a Terraform module or configuration, you can use unit tests to communicate the expected values of configuration to other collaborators in your team and organization. Since unit tests run independently of infrastructure resources, they have virtually zero cost to run frequently.

Contract tests

At the next level up from the bottom of the pyramid, contract tests check that a configuration using a Terraform module passes properly formatted inputs. Contract tests answer the question, “Does the expected input to the module match what I think I should pass to it?” Contract tests ensure that the contract between a Terraform configuration’s expected inputs to a module and the module’s actual inputs has not been broken. Most contract testing in Terraform helps the module consumer by communicating how the author expects someone to use their module. If you expect someone to use your module in a specific way, use a combination of input variable validations, preconditions, and postconditions to validate the combination of inputs and surface the errors. 
For example, use a custom input variable validation rule to ensure that an AWS load balancer’s listener rule receives a valid integer range for its priority:

variable "listener_rule_priority" {
  type        = number
  default     = 1
  description = "Priority of listener rule between 1 to 50000"

  validation {
    condition     = var.listener_rule_priority > 0 && var.listener_rule_priority < 50000
    error_message = "The priority of listener_rule must be between 1 to 50000."
  }
}

As a part of input validation, you can use Terraform’s rich language syntax to validate variables with an object structure to enforce that the module receives the correct fields. This module example uses a map to represent a service object and its expected attributes:

variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}

In addition to custom validation rules, you can use preconditions and postconditions to verify specific resource attributes defined by the module consumer. For example, you cannot use a validation rule to check if the address blocks overlap. Instead, use a precondition to verify that your IP addresses do not overlap with networks in HashiCorp Cloud Platform (HCP) and your AWS account:

resource "hcp_hvn" "main" {
  hvn_id         = var.name
  cloud_provider = "aws"
  region         = local.hcp_region
  cidr_block     = var.hcp_cidr_block

  lifecycle {
    precondition {
      condition     = var.hcp_cidr_block != var.vpc_cidr_block
      error_message = "HCP HVN must not overlap with VPC CIDR block"
    }
  }
}

Contract tests catch misconfigurations in modules before applying them to live infrastructure resources. You can use them to check for correct identifier formats, naming standards, attribute types (such as private or public networks), and value constraints such as character limits or password requirements. If you do not want to use custom conditions in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice. 
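A postcondition works the same way but checks attributes after Terraform evaluates the resource, so it can reference computed values via self. The following is a hypothetical sketch; the aws_instance resource, its variables, and the private-IP requirement are illustrative assumptions, not part of the examples above:

```hcl
# Hypothetical module resource: fail the plan or apply if the subnet
# chosen by the module consumer does not yield a private IP address.
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id

  lifecycle {
    postcondition {
      condition     = self.private_ip != null && self.private_ip != ""
      error_message = "aws_instance.app must receive a private IP from the provided subnet"
    }
  }
}
```

Because the check runs after evaluation, it surfaces contract violations that depend on values the consumer cannot know until Terraform computes them.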
Maintain these contract tests in the module repository and pull them into each Terraform configuration that uses the module using a CI framework. When someone references the module in their configuration and pushes a change to version control, the contract tests run against the plan representation before you apply. Unit and contract tests may require extra time and effort to build, but they allow you to catch configuration errors before running terraform apply. For larger, more complex configurations with many resources, you should not manually check individual parameters. Instead, use unit and contract tests to quickly automate the verification of important configurations and set a foundation for collaboration across teams and organizations. Lower-level tests communicate system knowledge and expectations to teams that need to maintain and update Terraform configuration.

Integration tests

While you do not need to create external resources to run lower-level tests, the top half of the pyramid includes tests that require active infrastructure resources to run properly. Integration tests answer the question, “Does this module or configuration create the resources successfully?” A terraform apply offers limited integration testing because it creates and configures resources while managing dependencies. You should write additional tests to check for configuration parameters on the active resource. In my example, I add a new terraform test to apply the configuration and create the file. Then, I verify that the file exists on my filesystem. The integration test creates the file using a terraform apply and removes the file after issuing a terraform destroy. 
run "check_file" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    condition     = fileexists("${var.service_kind}.hcl")
    error_message = "File `${var.service_kind}.hcl` does not exist"
  }
}

Should you verify every parameter that Terraform configures on a resource? You could, but it may not be the best use of your time and effort. Terraform providers include acceptance tests verifying that resources properly create, update, and delete with the right configuration values. Instead, use integration tests to verify that Terraform outputs include the correct values or number of resources. Integration tests also cover infrastructure configuration that can only be verified after a terraform apply, such as invalid configurations, nonconformant passwords, or the results of for_each iteration. When choosing an integration testing framework outside of terraform test, consider the existing integrations and languages within your organization. Integration tests help you determine whether or not to update your module version and ensure the changes run without errors. Since you have to set up and tear down the resources, you will find that integration tests can take 15 minutes or more to complete, depending on the resource. As a result, implement as much unit and contract testing as possible to fail quickly on wrong configurations instead of waiting for resources to create and delete.

End-to-end tests

After you apply your Terraform changes to production, you need to know whether or not you’ve affected end-user functionality. End-to-end tests answer the question, “Can someone use the infrastructure system successfully?” For example, application developers and operators should still be able to retrieve a secret from HashiCorp Vault after you upgrade the version. End-to-end tests can verify that changes did not break expected functionality. To check that you’ve upgraded Vault properly, you can create an example secret, retrieve the secret, and delete it from the cluster. 
I usually write an end-to-end test using a Terraform check to verify that any updates I make to a HashiCorp Cloud Platform (HCP) Vault cluster return a healthy, unsealed status:

check "hcp_vault_status" {
  data "http" "vault_health" {
    url = "${hcp_vault_cluster.main.vault_public_endpoint_url}/v1/sys/health"
  }

  assert {
    condition     = data.http.vault_health.status_code == 200 || data.http.vault_health.status_code == 473
    error_message = "${data.http.vault_health.url} returned an unhealthy status code"
  }
}

Besides a check block, you can write end-to-end tests in any programming language or testing framework. This usually includes an API call to check an endpoint after creating infrastructure. End-to-end tests usually depend on an entire system, including networks, compute clusters, load balancers, and more. As a result, these tests usually run against long-lived development or production environments.

Testing Terraform modules

When you test Terraform modules, you want enough verification to ensure a new, stable release of the module for use across your organization. To ensure sufficient test coverage, write unit, contract, and integration tests for modules. A module delivery pipeline starts with a terraform plan and then runs unit tests (and, if applicable, contract tests) to verify the expected Terraform resources and configurations. Then, run terraform apply and the integration tests to check that the module can still run without errors. After running integration tests, destroy the resources and release a new module version. The Terraform Cloud private registry offers a branch-based publishing workflow that includes automated testing. If you use terraform test for your modules, the private registry automatically runs those tests before releasing a module. When testing modules, consider the cost and test coverage of module tests. 
Conduct module tests in a different project or account so that you can independently track the cost of your module testing and ensure module resources do not overwrite environments. On occasion, you can omit integration tests because of their high financial and time cost. Spinning up databases and clusters can take half an hour or more, and when you’re constantly pushing changes, you might even create multiple test instances. To manage the cost, run integration tests after merging feature branches and select the minimum number of resources you need to test the module. If possible, avoid creating entire systems. Module testing applies mostly to immutable resources because of its create-and-delete sequence. The tests cannot accurately represent the end state of brownfield (existing) resources because they do not test updates. As a result, module testing provides confidence in the module’s successful usage, but not necessarily in applying module updates to live infrastructure environments.

Testing Terraform configuration

Compared to modules, Terraform configuration applied to environments should include end-to-end tests to check for end-user functionality of infrastructure resources. Write unit, integration, and end-to-end tests for the configuration of active environments. The unit tests do not need to cover the configuration in modules. Instead, focus on unit testing any configuration not associated with modules. Integration tests can check that changes successfully run in a long-lived development environment, and end-to-end tests verify the environment’s initial functionality. If you use feature branching, merge your changes and apply them to a production environment. In production, run end-to-end tests against the system to confirm system availability. Failed changes to active environments will affect critical business systems. In its ideal form, a long-running development environment that accurately mimics production can help you catch potential problems. 
From a practical standpoint, you may not always have a development environment that fully replicates a production environment because of cost concerns and the difficulty of replicating user traffic. As a result, you usually run a scaled-down version of production to save money. The difference between development and production will affect the outcome of your tests, so be aware of which tests may be more important for flagging errors or more disruptive to run. Even if configuration tests have less accuracy in development, they can still catch a number of errors and help you practice applying and rolling back changes before production.

Conclusion

Depending on your system’s cost and complexity, you can apply a variety of testing strategies to Terraform modules and configuration. While you can write tests in your programming language or testing framework of choice, you can also use the testing frameworks and constructs built into Terraform for unit, contract, integration, and end-to-end testing:

- Unit test: modules or configuration, using terraform test
- Contract test: modules, using input variable validation and preconditions/postconditions
- Integration test: modules or configuration, using terraform test
- End-to-end test: configuration, using check blocks

This post has explained the different types of tests, how you can apply them to catch errors in Terraform configurations and modules before production, and how to incorporate them into pipelines. Your Terraform testing strategy does not need to be a perfect test pyramid. At the very least, automate some tests to reduce the time you need to manually verify changes and check for errors before they reach production. Check out our tutorial on how to Write Terraform tests to learn about writing Terraform tests for unit and integration testing and running them in the Terraform Cloud private module registry. For more information on using checks, Use checks to validate infrastructure offers a more in-depth example. 
If you want to learn about writing tests for security and policy, review our documentation on Sentinel. View the full article
  12. Here are some top best practices to keep in mind as you build out a software testing practice at your organization. View the full article
  13. Today at HashiConf, we are excited to announce new capabilities for HashiCorp Terraform and Terraform Cloud to improve developer velocity, code quality, and infrastructure cost management. The new Terraform and Terraform Cloud announcements include the following:

- Terraform test framework (GA) helps produce higher-quality modules
- Test-integrated module publishing (beta) streamlines the testing and publishing process
- Generated module tests (beta) help module authors get started in seconds
- Enhanced editor validation in Visual Studio Code (GA) makes it easier to find and resolve errors
- Stacks (private preview) simplifies infrastructure provisioning and management at scale
- Ephemeral workspaces (GA) help optimize infrastructure spend

View the full article
  14. Jenkins is a popular open-source CI/CD automation server that helps automate various aspects of software development, including building, testing, and deploying applications. Jenkins is highly extensible, with over 1,000 available plugins that integrate it with various third-party tools and technologies. Consider a scenario where you're working on a large software project with multiple developers. Testing each and every change manually can be time-consuming and prone to human error. This is where Jenkins test cases can come in handy. View the full article
  15. The sheer number of tools, libraries, and frameworks can give many programmers a headache. Moreover, complex designs often require many of these components to work together or at least not interfere with each other. Database versioning — and integration tests during which we conduct it — are great examples of such problematic cooperation. There is also the aspect of the persistence layer in our code, which will be the subject of the above-mentioned tests. View the full article
  16. As a Linux system administrator, you can perform time-based scheduling of jobs/tasks using online cron job services or Cron, a powerful utility available in Unix/Linux. The post 5 Online Tools for Generating and Testing Cron Jobs for Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides. View the full article
  17. Today, we are excited to announce the general availability of HashiCorp Terraform 1.6, which is ready for download and immediately available for use in Terraform Cloud. This release adds numerous features to enhance developer productivity and practitioner workflows, including a powerful new testing framework, improvements to config-driven import, enhancements for Terraform Cloud CLI workflows, and more. Let’s take a look at what’s new... View the full article
  18. Tricentis added a Virtual Mobile Grid service to its portfolio to make it simpler to test mobile applications at scale. View the full article
  19. ForAllSecure provided early access to dynamic SBOM generation and SCA validation capabilities within its Mayhem Security automated code and API testing tool. View the full article
  20. Developers using SAM CLI to author their serverless application with Lambda functions can now create and use Lambda test events to test their function code. Test events are JSON objects that mock the structure of requests emitted by AWS services to invoke a Lambda function and return an execution result, serving to validate a successful operation or to identify errors. Previously, Lambda test events were only available in the Lambda console. With this launch, developers using SAM CLI can create and access a test event from their AWS account and share it with other team members. View the full article
  21. In the dynamic landscape of software development, collaborations often lead to innovative solutions that simplify complex challenges. The Docker and Microcks partnership is a prime example, demonstrating how the relationship between two industry leaders can reshape local application development. This article delves into the collaborative efforts of Docker and Microcks, spotlighting the emergence of the Microcks Docker Desktop Extension and its transformative impact on the development ecosystem... View the full article
  22. If you ever wonder why continuous testing is so important, think about the following: In the past, software testing was typically done after the code was written and sent to the QA department for independent testing. When bugs were discovered, the code was returned to the developers for correction. While this testing method worked, it […] The post The Future of Continuous Testing in CI/CD appeared first on DevOps.com. View the full article
  23. A survey of CEOs and IT professionals involved in application testing finds a significant gap in terms of how acceptable it is to release software that has not been properly tested. The survey, conducted by the market research firm Censuswide on behalf of Leapwork, a test automation platform provider, polled 480 CEOs in the U.S. […] The post Survey Warns of Looming Software Testing Crisis appeared first on DevOps.com. View the full article
  24. For the past two years, companies have leaned into digital transformation projects to keep their digital offerings at pace with their in-person operations. Rather than become a stopgap, this digital approach became the new standard for the modern employee and customer experience. It also creates friction between a company’s development team and its user base. […] The post How Automated Testing Closes the Development and Delivery Gap appeared first on DevOps.com. View the full article
  25. Buildkite has added an analytics tool to its continuous integration/continuous delivery (CI/CD) platform that identifies flaky tests. Buildkite’s co-CEO Keith Pitt said the company’s Test Analytics tool enables continuous performance monitoring and real-time insights for test suites that, when not properly written, only serve to waste DevOps teams’ time. Rather than eliminating those tests, most […] The post Buildkite Adds Analytics Tools to Identify Flaky App Tests appeared first on DevOps.com. View the full article