Showing results for tags 'networking'.

  1. I am running my app on 10.3.23.23:3000, but I would like to access it on port 80 instead of 3000, and at the same time I don't want to change the app config. What other ways do we have?

If you want your application to be accessible on port 80 without changing its configuration (it currently runs on port 3000), you can use a reverse proxy or port forwarding. These methods let your application keep running on its original port while it appears externally on port 80. Here are a few approaches:

1. Using a Reverse Proxy

A reverse proxy server sits in front of your application and forwards client requests to the application backend. This is a common way to expose applications running on non-standard ports on standard ports like 80 (HTTP) or 443 (HTTPS).

Nginx: If you don't have Nginx installed, you'll need to install it first. After adding a server block that proxies port 80 to port 3000, restart or reload Nginx for the changes to take effect.

Apache: If you're using Apache as a web server, you can configure it to act as a proxy using mod_proxy and mod_proxy_http. Ensure that mod_proxy and mod_proxy_http are enabled, then restart Apache.

2. Using iptables for Port Forwarding

If you're running a Linux server and prefer not to use a reverse proxy, you can use iptables for port forwarding. This method routes traffic coming into port 80 to port 3000. To make the rule persistent across reboots, install iptables-persistent or manually add the rule to your startup scripts, depending on your Linux distribution:
$ sudo apt-get update
$ sudo apt-get install iptables-persistent
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

Save your rules manually: if you've already installed iptables-persistent, or you need to save new rules after installation, save your current iptables rules by executing:

$ sudo netfilter-persistent save

3. Using a Load Balancer

If you're deploying your application in a cloud environment (such as AWS, GCP, or Azure), you can use the provider's load balancer service to forward traffic from port 80 to port 3000. You'll need to set up a load balancer, point it at your application, and configure the listeners to forward traffic from port 80 to port 3000. Depending on the cloud provider's offerings, this approach also gives you the benefit of scaling, SSL termination, and more.

Each of these methods allows you to expose your application on port 80 without changing the application's configuration. Choose the one that best fits your environment and needs. The post How to migrate all the traffic from one to another port appeared first on DevOpsSchool.com. View the full article
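The Nginx and Apache example configurations referred to above might look roughly like the following sketches. These are illustrative only: example.com is a placeholder server name, and the app is assumed to listen at 10.3.23.23:3000 as stated in the question.

```nginx
# Hypothetical /etc/nginx/conf.d/myapp.conf
server {
    listen 80;
    server_name example.com;  # placeholder

    location / {
        # Forward all requests to the app on port 3000
        proxy_pass http://10.3.23.23:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

```apache
# Hypothetical Apache virtual host (requires mod_proxy and mod_proxy_http)
<VirtualHost *:80>
    ServerName example.com

    ProxyPreserveHost On
    ProxyPass / http://10.3.23.23:3000/
    ProxyPassReverse / http://10.3.23.23:3000/
</VirtualHost>
```

Reload the respective server afterwards (e.g. sudo systemctl reload nginx or sudo systemctl restart apache2).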
  2. Switches and routers are both critical components of networking infrastructure, but they serve different functions within a network. Understanding the difference between them is fundamental to grasping how networks handle data and connect devices. Here's a comparison highlighting their primary differences:

1. Functionality: Switches connect multiple devices together on a Local Area Network (LAN). They operate at the data link layer (Layer 2) of the OSI model and use MAC addresses to forward data to the correct device within the LAN. Routers connect multiple networks together, such as a LAN to a Wide Area Network (WAN) or two LANs. They operate at the network layer (Layer 3) and use IP addresses to route data between networks, making decisions based on the most efficient path for data to travel.

2. Network Segments: Switches work within a single network segment, managing and switching data packets among the connected devices. Routers segment and organize the network into separate broadcast domains, preventing broadcasts from reaching every part of the network, which increases efficiency and security.

3. Traffic Handling: Switches handle traffic within the same network. They can significantly increase a network's efficiency by sending data only to the intended recipient device within the LAN. Routers handle and direct outgoing and incoming traffic between different networks, managing data packets among devices that may not be on the same network.

4. Performance: Switches can enhance performance within a LAN by reducing unnecessary data transmission through packet switching, which sends data directly to the device it's addressed to. A router's effect on network performance depends on the routes it chooses for data packets; advanced routing algorithms help optimize the speed and efficiency of data transmission across networks.

5. Security: Switches offer some level of security by segregating traffic between devices within the LAN. Managed switches provide advanced features like VLANs, which can be used to further segment and secure the network. Routers play a significant role in network security: they can provide firewall protection, filter traffic, and perform network address translation (NAT), which hides the IP addresses of devices on a local network from the outside world.

6. Use Cases: Switches are used to build the network infrastructure within a home, office, or any establishment requiring multiple devices to connect within a single LAN. Routers are essential for connecting a LAN to the internet or to other LANs, making them indispensable for any network that needs access to external networks.

Switches vs Routers

Feature              | Switch                                    | Router
Area of Expertise    | Connects devices within a single network  | Connects multiple networks
Data Delivery Method | Uses MAC addresses                        | Uses IP addresses
OSI Model Layer      | Layer 2 (Data Link Layer)                 | Layer 3 (Network Layer)
Function             | Directs data packets to specific devices  | Routes data packets between different networks
Analogy              | Mailroom sorter within a building         | Post office sending mail to different locations
Connectivity         | Usually wired                             | Wired or wireless
Routing Table        | No                                        | Yes
Services             | Limited (e.g., basic security)            | More advanced (e.g., NAT, QoS)
Cost                 | Typically less expensive                  | Typically more expensive

The post Networking Fundamental: Difference between Switches & Routers appeared first on DevOpsSchool.com. View the full article
  3. Cilium is an eBPF-based project that was originally created by Isovalent, open-sourced in 2015, and has become the center of gravity for cloud-native networking and security. With 700 active contributors and more than 18,000 GitHub stars, Cilium is the second most active project in the CNCF (behind only Kubernetes), where in Q4 2023 it became the first project to graduate in the cloud-native networking category. A week ahead of the KubeCon EU event where Cilium and the recent 1.15 release are expected to be among the most popular topics with attendees, I caught up with Nico Vibert, Senior Staff Technical Engineer at Isovalent, to learn more about why this is just the beginning for the Cilium project. Q: Cilium recently became the first CNCF graduating “cloud native networking” project — why do you think Cilium was the right project at the right time in terms of the next-generation networking requirements of cloud-native? View the full article
  4. In the dynamic realm of software development and deployment, Docker has emerged as a cornerstone technology, revolutionizing the way developers package, distribute, and manage applications. Docker simplifies the process of handling applications by containerizing them, ensuring consistency across various computing environments. A critical aspect of Docker that often puzzles many is Docker networking. It’s an essential feature, enabling containers to communicate with each other and the outside world. This ultimate guide aims to demystify Docker networking, offering you tips, tricks, and best practices to leverage Docker networking effectively. Understanding Docker Networking Basics Docker networking allows containers to communicate with each other and with other networks. Docker provides several network drivers, each serving different use cases: View the full article
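As a small illustration of the bridge driver, the most common of the drivers referred to above, a hypothetical docker-compose file might define a user-defined bridge network (the service and network names here are arbitrary):

```yaml
# docker-compose.yml — sketch of a user-defined bridge network
services:
  web:
    image: nginx
    networks: [app-net]
  db:
    image: redis
    networks: [app-net]

networks:
  app-net:
    driver: bridge  # services on app-net can reach each other by service name,
                    # e.g. the "web" container can resolve the host name "db"
```

The same network could be created imperatively with docker network create --driver bridge app-net.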
  5. In the ever-evolving world of cloud computing and containerization, Kubernetes has emerged as the frontrunner in orchestrating containerized applications. As a Chief Architect with over two decades in the industry, I've witnessed firsthand the transformative impact Kubernetes has on application deployment and management. This article aims to demystify the complex world of Kubernetes networking, a critical component for the seamless operation of containerized applications. Kubernetes networking can be complex, but it's essential for ensuring that containers can communicate efficiently both internally and externally. The networking model in Kubernetes is designed to be flat, which means that containers can communicate with each other without the need for NAT (Network Address Translation). View the full article
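To make the flat model concrete: every pod gets its own cluster-routable IP address, so one pod can reach another directly on that IP with no NAT in between. A minimal sketch (the name and image are placeholders):

```yaml
# pod.yaml — a pod that serves HTTP on its own pod IP
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
```

Once running, kubectl get pod backend -o wide shows the pod's IP, and any other pod in the cluster can reach that IP directly.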
  6. The factory-precaching-cli tool is a containerized Go binary publicly available in the Telco RAN tools container image. This blog shows how the factory-precaching-cli tool can drastically reduce the OpenShift installation time when using the Red Hat GitOps Zero Touch Provisioning (ZTP) workflow. This approach becomes very significant when dealing with low bandwidth networks, either when connecting to a public or disconnected registry. View the full article
  7. Being able to enforce airtight application security at the cluster-wide level has been a popular ask from cluster administrators. Key admin user stories include ... View the full article
  9. AWS Network Load Balancer (NLB) now supports Availability Zone DNS affinity, the ability to disable connection termination for unhealthy targets, and UDP connection termination by default. View the full article
  10. An excellent overview of the various network protocols, from ByteByteGo: https://blog.bytebytego.com/p/ep80-explaining-8-popular-network
  10. Vitria Technology's Andrew Colby discusses why AIOps, artificial intelligence and machine learning (AI/ML) are key capabilities for organizations to continue providing the level of performance and reliability customers expect. View the full article
  11. In this post, we'll illustrate an enterprise IT scenario in which VPCs are overseen by a central network team, including configuration of VPC resources such as IP allocation, route policies, internet gateways, NAT gateways, security groups, peering, and on-premises connectivity. The network account, which owns the centralized VPC, shares subnets with a participant application account managed by a platform team; both accounts are part of the same organization. In this use case, the platform team owns the management of the Amazon EKS cluster. We'll also cover the key considerations of using shared subnets in Amazon EKS... View the full article
  12. In this post, we'll explore how to publish and consume services running on Amazon Elastic Container Service (Amazon ECS) and AWS Lambda as Amazon VPC Lattice services. For an introduction to Amazon VPC Lattice, please read the documentation here. One main reason customers experience a lower velocity of innovation is the complexity they deal with while trying to ensure that their applications can communicate in a simple and secure way. Amazon VPC Lattice is a powerful application networking service that removes this complexity and gives developers a simpler user experience: they can share their applications and connect with dependencies without having to set up any of the underlying network connectivity across Amazon Virtual Private Clouds (Amazon VPCs), AWS accounts, and even overlapping IP addressing. It handles both application-layer load balancing and network connectivity, so that developers can focus on their applications instead of infrastructure... View the full article
  13. Today, AWS IoT Core for LoRaWAN announces the general availability of public network support for LoRaWAN-based Internet of Things (IoT) devices. With this update, you can now connect your LoRaWAN devices to the cloud using publicly available LoRaWAN networks provided by Everynet, a LoRaWAN network operator, without deploying and operating a private LoRaWAN network. The public LoRaWAN network is provided as a service and operated by Everynet, and by adding this public network support, customers can choose from within the AWS console to use Everynet's network. View the full article
  14. Having OpenShift (or Kubernetes) cluster nodes able to learn routes via BGP is a popular ask. View the full article
  15. In the realm of cloud computing, Azure stands as a behemoth, offering a multitude of services to cater to the diverse needs of businesses and developers. As you delve deeper into the Azure ecosystem, you may come across a seemingly enigmatic entity – the 168.63.129.16 IP address. What exactly is it, and why is it […] The article What is Azure’s Special 168.63.129.16 IP Address? appeared first on Build5Nines. View the full article
  16. OpenShift Virtualization is Red Hat's solution for companies that are modernizing by adopting a containerized architecture for their applications but find that virtualization remains a necessary part of their data center deployment strategy. View the full article
  17. Today, we’re excited to announce the native support for enforcing Kubernetes network policies with Amazon VPC Container Networking Interface (CNI) Plugin. You can now use Amazon VPC CNI to implement both pod networking and network policies to secure the traffic in your Kubernetes clusters. Native support for network policies has been one of the most requested features on our containers roadmap... View the full article
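A Kubernetes NetworkPolicy of the kind now enforceable by the VPC CNI looks roughly like this sketch (the names, labels, and port are placeholders):

```yaml
# Allow only pods labeled app: frontend to reach pods labeled app: backend on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Once any ingress policy selects the backend pods, all other ingress traffic to them is denied by default.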
  18. In just a few days, Google Cloud Next returns to San Francisco as a large, in-person, three-day event. There, you'll learn all about the technologies you need to build, connect, and secure all your cloud-first, Kubernetes, and AI/ML workloads. You'll gain hands-on experience with the latest cloud networking and network security technologies, and you'll expand your peer network. If your role involves designing cloud networks or implementing cybersecurity, or you just want to keep tabs on the latest network connectivity and security trends, Next '23 is the place for you. Here is a list of specially curated content for the networking professional. Click on the links to add these sessions to your schedule.

Spotlight Keynote

SPTL 202 What's next for architects and IT professionals
How do you ensure you've got the right infrastructure to power all the applications that enable your business? How are you preparing to innovate with generative AI and the latest in ML? How are you deploying modern apps with containers? How are you addressing your data sovereignty requirements? Get your questions answered in this Spotlight Keynote showcasing the latest infrastructure advancements, and hear from customers that are achieving their cost, sustainability, reliability, and security goals on Google's planet-scale infrastructure.

Breakout sessions

ARC 201 What's new in cloud networking: AI-optimized infrastructure, ML-powered security, and more
Whether you're running AI/ML, data and analytics, web, media, or HPC workloads, cloud networking delivers simplified and more resilient services to help you connect and secure them. Join this session with our product team for the latest innovations in cloud networking, and meet Walmart, Sharechat, and Palo Alto Networks over a fireside chat.
ARC 202 Design secure enterprise networks for a multi-cloud world
With the shift to a hybrid workforce and distributed application deployment across multiple cloud providers, enterprises are having to build complex network architectures for reliable any-to-any connectivity, enabling optimal user-to-app and app-to-app experiences. Get a deep dive from our product team on enterprise cloud network design with Priceline and Palo Alto Networks.

ARC 203 Elevate end user experience with planet-scale Google Cloud CDN
Delivering engaging content while ensuring quality of experience (QoE) at global scale is becoming a critical differentiator. Discover how Google Cloud helps enable scalable content delivery with our planet-scale network, and hear from Sony's Crunchyroll on how they're leveraging Google Cloud.

SEC 203 Network security fundamentals: Creating layered network defenses with built-in tools
Security has become a top-of-mind concern for line-of-business executives and practitioners alike. Learn how to implement network security and enforce security controls, and hear from Wix on how they are using Google Cloud's built-in network security tools.

SEC 301 Innovations for securing workloads with Google Cloud next-generation firewalls
Firewalls are a critical component of any network security strategy. In the cloud, organizations want advanced threat protection from cloud-first and best-of-breed third-party solutions. Hear about the latest next-generation firewall innovations from Google, and how our customers McKesson Inc., CoverMyMeds, and Salesforce implemented advanced threat protection for the workloads in their organizations.

Network live and in person

Everyone is excited to be back in person.
Support your fellow customer speakers by stopping by the Showcase Theater at Moscone South:

8/29 Tue @ 12:30 - 1 PM: How Walmart simplified multi-cloud adoption with Cross-Cloud Interconnect
Walmart runs a seamless platform across multiple cloud providers to accelerate innovation. Join this session featuring Walmart and Google speakers to learn how Walmart built a global network to connect, secure, and consume multi-cloud services through Cross-Cloud Interconnect.

8/30 Wed @ 4:30 - 5 PM: How Broadcom blocks DDoS attacks with Cloud Armor
Broadcom is a global provider of enterprise security solutions and has migrated its flagship cybersecurity solutions to Google Cloud. Join this session to hear from Symantec/Broadcom on how Cloud Armor with ML-powered protection prevented intense DDoS attacks, and how it continues to mitigate these threats while strengthening Broadcom's infrastructure.

Add these exciting sessions to your schedule. We're looking forward to seeing you in San Francisco!
  19. Defining IP Address spaces using CIDR notation for network and subnets is a common task for network engineers, DevOps Engineers, and Site Reliability Engineers (SRE) when configuring infrastructure deployments using HashiCorp Terraform. CIDR notation is used uniquely to declare the IP Address space for an entire network, as well as to divide the full IP […] The article Terraform: Using CIDR Notation to Define IP Address Ranges and Subnet Address Spaces appeared first on Build5Nines. View the full article
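As an illustration of the idea in that article, Terraform's built-in cidrsubnet function carves smaller subnet ranges out of a parent CIDR block; a minimal sketch (the variable name and values are arbitrary):

```hcl
variable "vpc_cidr" {
  default = "10.0.0.0/16"
}

# cidrsubnet(prefix, newbits, netnum): adding 8 bits to the /16 yields /24 subnets,
# and netnum selects which /24 within the parent range.
locals {
  subnet_a = cidrsubnet(var.vpc_cidr, 8, 0) # "10.0.0.0/24"
  subnet_b = cidrsubnet(var.vpc_cidr, 8, 1) # "10.0.1.0/24"
}
```

The same pattern scales to any split: cidrsubnet(var.vpc_cidr, 4, n) would instead produce /20 subnets.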
  20. Ntop, now succeeded by Ntopng, is an open-source network monitoring solution that provides users with real-time network usage information for their systems. It uses a web interface to analyze your system's network traffic, so you can easily visualize your network performance and system health in a browser tab. In this article, we will guide you through monitoring your Raspberry Pi's network traffic by setting up Ntop on your device. Let's move on to the setup process.

Monitoring Network Traffic Using Ntop on Raspberry Pi

Since Ntop is an open-source monitoring tool, you can easily download its deb package from the official website. To set it up on your device, follow these steps:

Step 1: Download Ntop on Raspberry Pi

First, download the Ntop deb package for your Raspberry Pi device from the official website using the following command:

$ wget http://packages.ntop.org/RaspberryPI/apt-ntop_1.0.190416-469_all.deb

Step 2: Install the Ntop deb Package on Raspberry Pi

Install the downloaded deb package using the following command:

$ sudo apt install ./apt-ntop_1.0.190416-469_all.deb

Step 3: Install Ntopng and Required Packages on Raspberry Pi

To install the web-based monitoring application Ntopng (the next-generation version of Ntop) along with required packages such as the nProbe network probe and the n2n firewall-bypassing tool, apply the following command:

$ sudo apt install ntopng nprobe n2n -y

Step 4: Configure Ntopng on Raspberry Pi

After completing the Ntopng installation, set the network interface and port number so that you can access the network usage information for the chosen interface in your browser.

To find the name of your network interface, you can apply the following command:

$ ifconfig

Then open the Ntopng configuration file using the following command:

$ sudo nano /etc/ntopng/ntopng.conf

Inside the file, uncomment the following options:

-i=eth1
-w=3000

In the interface option (-i), use your own interface name; in our case it is wlan0. With the changes applied, use the CTRL+X keys to save the file. You will also need to add your IP range and the network interface you are going to use within the Ntopng service. To do this, open the startup configuration file:

$ sudo nano /etc/ntopng/ntopng.start

Add the following text and save the file:

--local-networks "192.168.100.0/24" --interface wlan0

Ensure that you specify the correct IP range based on your local IP address, and if you are using a different network interface, replace "wlan0" accordingly.

Step 5: Start and Check the Ntopng Service on Raspberry Pi

After the configuration step, start the Ntopng service on your Raspberry Pi with the following command:

$ sudo systemctl start ntopng

You can check the status of the Ntopng service with:

$ sudo systemctl status ntopng

If the output shows an "active (running)" status, your configuration is correct and you can access the Ntopng dashboard.

Step 6: Access Ntopng on Raspberry Pi

Now open any browser and go to http://<Pi-IP>:3000 to open the Ntopng login page. Enter the default username and password (admin/admin) and select the "Login" button. In the next window, change the default password to keep your network information secure; once you are done, click the "Change Password" button to apply the changes.

Afterwards, you will be able to access the Ntopng dashboard, where you will find a range of network-related information for your Raspberry Pi's network interface. You can also choose "System" instead of "wlan0" to view your system information in the browser. With the dashboard on screen, you have a complete overview of your device's network interface.

Conclusion

Ntopng provides real-time network usage information for your device's network interface in your browser. You can set it up on your Raspberry Pi by downloading the Ntop deb package from the official website and installing it with the apt installer. Afterward, install Ntopng with its required packages, and once the configuration is done, you can access the Ntopng dashboard using your device's IP address. View the full article
  21. AWS Network Firewall now supports Amazon Virtual Private Cloud (VPC) prefix lists to simplify management of your firewall rules and policies across your VPCs. Prefix lists enable you to group one or more CIDR blocks into a single object. You can group IP addresses that you frequently use in a prefix list, and reference this list in AWS Network Firewall rule groups. Previously you needed to update individual firewall rules when scaling your network to add new IP addresses, which can be time-consuming and error-prone. Now you can update the relevant prefix list and all AWS Network Firewall rule groups that reference the prefix list are automatically updated. As you scale your network, you can use prefix lists to simplify management of your firewall rule groups and policies across multiple VPCs and accounts in the same AWS Region. You can use AWS-managed prefix lists or you can create and manage your own prefix lists. View the full article
  22. AWS Firewall Manager now enables you to centrally deploy AWS Network Firewalls with additional strict rule order, default deny, and default drop configurations. View the full article
  23. The intersection of CI/CD and network automation has grown substantially over the past few years. Network teams have seemingly reached the point in their automation and orchestration journey where it has become critical to implement a strategy to manage the testing, version control and deployment of both network changes and automation assets. While CI/CD has […] The post A Full Pipeline: CI/CD for Network Automation and Orchestration appeared first on DevOps.com. View the full article