Company Detail

Test Triangle


Job Openings

  • DevOps Engineering Tech Lead

    - Sheffield
    Job Description
    Role: DevOps Engineering Tech Lead
    Job Category: GCB4
    Location: UK
    Skills:
    DevOps: Ansible
    Language: Python
    Container Tools: Kubernetes, OpenShift CLI.
    Key Responsibilities:
    Develop, deploy, and manage OCPV environments.
    Automate infrastructure provisioning and configuration using Ansible (see the sketch below).
    Write and maintain Python scripts for automation, monitoring, and integration tasks.
    Collaborate with DevOps, cloud, and application teams to deliver scalable solutions.
    Troubleshoot and resolve issues related to OCPV, Ansible playbooks, and Python automation.
    Document processes, configurations, and best practices.
     
    Requirements:
     
    Strong experience with OpenShift Container Platform Virtualization (OCPV).
    Proficiency in writing and managing Ansible playbooks and roles.
    Solid Python programming skills for automation and scripting.
    Familiarity with CI/CD pipelines and DevOps practices.
    Good understanding of Linux systems and networking.
    Excellent problem-solving and communication skills.
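
    To make the Ansible automation above concrete, here is a minimal, illustrative Python wrapper around the ansible-playbook CLI; the playbook name (site.yml) and inventory path (inventory/ocpv) are hypothetical, and a real pipeline would add logging and retries:

```python
# Illustrative only: run a hypothetical Ansible playbook and surface failures.
import subprocess
import sys

def run_playbook(playbook: str, inventory: str) -> int:
    """Run ansible-playbook against an inventory and return its exit code."""
    result = subprocess.run(
        ["ansible-playbook", playbook, "-i", inventory],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # ansible-playbook exits non-zero when any host fails
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_playbook("site.yml", "inventory/ocpv"))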



    Requirements: Ansible
  • Data Engineering Tech Lead  

    - Sheffield
    Job Description
    Role: Data Engineering Tech Lead
    Job Category: GCB4
    Location: UK
    Key Responsibilities:
    • Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
    • Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
    • Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
    • Integrate processed telemetry into Splunk for visualisation, dashboards, alerting, and analytics to achieve Observability Level 4 (proactive insights); a consumer-to-Splunk sketch follows the skills list below.
    • Implement schema management (Avro/Protobuf), governance, and versioning for telemetry events.
    • Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
    • Instrument services with OpenTelemetry; standardise tracing, metrics, and structured logging across platforms (see the sketch after this list).
    • Use LLMs to enhance observability capabilities (e.g., query assistance, anomaly summarisation, runbook generation).
    • Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
    • Ensure security, compliance, and best practices for data pipelines and observability platforms.
    • Document data flows, schemas, dashboards, and operational runbooks.
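
    As a hedged illustration of the OpenTelemetry instrumentation above, here is a minimal sketch using the opentelemetry-sdk Python package; the span name and tenant attribute are hypothetical, and a real deployment would export to a collector rather than the console:

```python
# Minimal tracing setup with the OpenTelemetry Python SDK.
# ConsoleSpanExporter stands in for a real OTLP/collector exporter.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer(__name__)

# Wrap a unit of pipeline work in a span; "enrich-batch" and the
# tenant attribute are placeholder names for illustration.
with tracer.start_as_current_span("enrich-batch") as span:
    span.set_attribute("tenant.id", "team-a")
    # ... transformation/enrichment work would happen here ...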
     
    Required Skills:
    • Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect/KSQL/KStream).
    • Proficiency with OpenShift/Kubernetes telemetry (OpenTelemetry, Prometheus) and CLI tooling.
    • Experience integrating telemetry into Splunk (HEC, UF, source types, CIM), building dashboards and alerting.
    • Strong data engineering skills in Python (or similar) for ETL/ELT, enrichment, and validation.
    • Knowledge of event schemas (Avro/Protobuf/JSON), contracts, and backward/forward compatibility.
    • Familiarity with observability standards and practices; ability to drive toward Level 4 maturity (proactive monitoring, automated insights).
    • Understanding of hybrid cloud and multi-cluster telemetry patterns.
    • Security and compliance for data pipelines: secret management, RBAC, encryption in transit/at rest.
    • Good problem-solving skills and ability to work in a collaborative team environment.
    • Strong communication and documentation skills.
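
    To make the consumer-to-Splunk flow concrete, here is a minimal sketch assuming the confluent-kafka and requests packages and Splunk's HTTP Event Collector (HEC); the broker address, topic, group id, sourcetype, and token are all placeholders:

```python
import json
import requests
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka.example.com:9092",  # hypothetical broker
    "group.id": "telemetry-splunk-forwarder",       # hypothetical group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["ocp-telemetry"])               # hypothetical topic

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "..."  # placeholder; never hard-code real tokens

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Splunk HEC expects a JSON body with an "event" field.
        requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            json={"event": event, "sourcetype": "ocp:telemetry"},
            timeout=5,
        )
finally:
    consumer.close()
```

    A production forwarder would batch HEC posts and handle consumer offsets explicitly; this sketch only shows the shape of the integration.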



    Requirements: Data Engineer
  • RedHat Linux Admin  

    - Sheffield
    Job Description
    OpenShift Site Reliability Engineer (SRE)
    Skill: RedHat Linux Administration
    Location: UK
     
    Primary job responsibilities
    We are seeking a skilled OpenShift Site Reliability Engineer (SRE) to join our team. In this role, you will be responsible for ensuring the reliability, availability, and performance of our OpenShift-based virtual/container platforms and services, with a focus on automation. You will work and collaborate across teams such as Applications, Hardware, and Network; develop secure service architectures using cloud-native technologies; develop systems, primarily in shell scripting, YAML, Ruby, Python, and Go, to prevent outages through automatic scanning and remediation (see the sketch below); establish and enforce SRE best practices through platform constraints and high-fidelity system modelling; and participate in an on-call rotation.
     
    Required skills
     
    1. Hands-on experience with OpenShift virtualization and Kubernetes administration.
    2. Understanding of distributed systems and common distributed-system failure domains. Experience managing a production service with RedHat, Windows, and ESXi.
    3. Strong knowledge of Linux systems and networking.
    4. Experience with monitoring, logging, alerting, and observability tools (e.g., OTel, Prometheus, Grafana, Splunk).
    5. Proficiency in scripting and automation languages: Python, shell, Go, Terraform, etc.
    6. Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI).
    7. Understanding of containerization (Docker) and microservices architecture.
    8. Ansible: configuration management and deployment.
    9. Good problem-solving and communication skills.
     
    Soft Skills:
     
    10. Experience and affinity for improving team performance.
    11. Active listening skills.
    12. Mindsets and behaviours/self-mastery.
    13. Proven experience in Compute, OpenShift, Kubernetes, hypervisors, storage, Windows, networks, and Linux.
    14. Work with industry groups and vendors outside of HSBC to establish and maintain HSBC's involvement and influence.
    15. Accountability for the control and compliance of the engineering process.
    16. Promote innovation and adoption of cutting-edge specialist technologies and practices within the domain.
    17. Promote the development of engineers through coaching and mentoring.
    18. Consult as required in other areas to assist and provide a different perspective to programmes or projects that require it.
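
    As a rough illustration of the "automatic scanning and remediation" responsibility above, this Python sketch polls node health via the OpenShift oc CLI. It assumes an authenticated oc session, and the alerting action is a placeholder for real remediation (cordon/drain, ticketing, paging):

```python
# Illustrative scan-and-alert loop; node data comes from `oc get nodes -o json`.
import json
import subprocess

def not_ready_nodes() -> list[str]:
    out = subprocess.run(
        ["oc", "get", "nodes", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    nodes = json.loads(out)["items"]
    bad = []
    for node in nodes:
        for cond in node["status"]["conditions"]:
            # A node is healthy only when its Ready condition is "True".
            if cond["type"] == "Ready" and cond["status"] != "True":
                bad.append(node["metadata"]["name"])
    return bad

if __name__ == "__main__":
    for name in not_ready_nodes():
        # Placeholder: real remediation or paging would go here.
        print(f"ALERT: node {name} is NotReady")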


  • Kafka Developer  

    - Leeds
    Job Description
    Experience:
    5+ years of experience in application integration development
    2+ years of experience in Cloud Integration Services and Kafka
    3 years of experience in building & managing Confluent / Apache Kafka platforms in Cloud and/or on premise environments.
     
    Responsibilities:
    Manage Kafka Cluster build, including Design, Infrastructure planning, High Availability and Disaster Recovery
    Build and support producer / consumer apps
    Implement encryption using SSL, authentication using SASL/LDAP, and authorization using Kafka ACLs across Zookeeper, Broker/Client, Connect clusters/connectors, Schema Registry, REST API, producers/consumers, and KSQL (a client-side configuration sketch follows this listing)
    SSL Certificate management including public key management
    Setting up monitoring tools such as Splunk, Prometheus, and Grafana to provide metrics from various Kafka cluster components (e.g., Broker, Zookeeper, Connect, REST Proxy, MirrorMaker, Schema Registry, KSQL)
    Undertake Lifecycle Management across the Kafka on premise environments.
    Research and recommend innovative ways to maintain the environment and ensure automation is undertaken.
    Undertake regular assessments of the platform health and stability, create improvement plans and ensure automation/lifecycle management is undertaken.
     
    Qualifications:
    5+ years of experience in application integration development
    2+ years of experience in Cloud Integration Services and Kafka
    3 years of experience in building & managing Open Source / Apache Kafka platforms in Cloud and/or on premise environments.
    Experience in Containerisation (Openshift / Kubernetes).
    Interaction with Oracle / PostgreSQL databases
    Experience in Disaster recovery planning & testing for the Middleware Services block application and supporting Product Grid block application testing
    Experience in Setting up monitoring tools for Kafka clusters.
    Knowledge of Infrastructure, Network topologies and Storage (SAN/NAS/DAS)
    Understand clustering and virtualisation technologies
    Ability to act as an architect: attend architecture forums and demonstrate the pros and cons of different solutions
    Strong communication skills and the ability to work with business areas
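
    For flavour, here is a minimal, hedged sketch of a producer configured for SASL_SSL with the confluent-kafka Python client; the broker, credentials, CA path, topic, and SASL mechanism are placeholders that depend on the cluster:

```python
from confluent_kafka import Producer

# Illustrative SASL_SSL client configuration; all values are placeholders.
producer = Producer({
    "bootstrap.servers": "broker.example.com:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-512",   # PLAIN/GSSAPI also common
    "sasl.username": "app-user",
    "sasl.password": "********",
    "ssl.ca.location": "/etc/kafka/ca.pem",
})

def delivery(err, msg):
    # Called once per message with the broker's ack or an error.
    if err is not None:
        print(f"delivery failed: {err}")

producer.produce("payments.events", key="order-1",
                 value=b'{"status":"ok"}', callback=delivery)
producer.flush()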


  • INFRASTRUCTURE AND PLATFORM ARCHITECT  

    - London
    Job Description
    INFRASTRUCTURE AND PLATFORM ARCHITECT L2
    Location: London
    Mandatory Skills: Google Cloud Admin
    We are looking for an experienced Infrastructure Engineer with deep Google Cloud Platform (GCP) networking expertise to design, build, automate, and operate cloud network services at scale. The role includes a DNS as a Service offering, IP Address Management (IPAM), integrations with ServiceNow, FinOps automation (including tagging), Terraform-based infrastructure as code, and policy as code for compliance. You'll partner with Operations, Security, FinOps, and Platform Engineering to deliver reliable, compliant, and cost-optimized cloud networking services.
    Key Responsibilities
    Network Design & Operations (GCP)
    • Design, implement, and operate GCP networking: VPCs, subnets, routing (Cloud Router/BGP), VPC peering, Private Service Connect, Cloud NAT, Cloud Firewall, Cloud Armor, load balancing (L7/L4).
    • Build scalable DNS and IPAM capabilities (DDI) across cloud and hybrid environments; manage Cloud DNS, forwarders, split-horizon, and DNSSEC where applicable.
    • Define and enforce network security controls and segmentation aligned with compliance frameworks and internal policies.
    • Troubleshoot complex network issues using packet capture, flow logs, and observability tooling.
    DNS as a Service (DNSaaS)
    • Own design and rollout of DNS as a Service: self-service APIs/portals, role-based access, change governance, auditability, and automated validations.
    • Standardize DNS zones, records, naming conventions, and lifecycle management across environments.
    IP Address Management (IPAM)
    • Implement and manage IPAM across GCP and hybrid networks; maintain an authoritative inventory of IP allocations, subnets, and DHCP scopes (a minimal allocation sketch follows this section).
    • Integrate IPAM with provisioning pipelines and ServiceNow for streamlined requests and approvals.
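    As a minimal sketch of the IPAM allocation idea, the following uses only the Python standard library; the supernet and the already-allocated subnet are hypothetical:

```python
# Carve /24s out of a supernet and track which are already assigned.
import ipaddress

SUPERNET = ipaddress.ip_network("10.128.0.0/16")      # hypothetical range
allocated = {ipaddress.ip_network("10.128.0.0/24")}   # already in use

def next_free_subnet(new_prefix: int = 24) -> ipaddress.IPv4Network:
    """Return the first subnet of the requested size not yet allocated."""
    for candidate in SUPERNET.subnets(new_prefix=new_prefix):
        if not any(candidate.overlaps(used) for used in allocated):
            allocated.add(candidate)
            return candidate
    raise RuntimeError("supernet exhausted")

print(next_free_subnet())  # 10.128.1.0/24
```

    A real IPAM service would persist allocations and expose this behind an API with approvals; the sketch only shows the allocation logic.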
    Automation & Integrations
    • Develop automation for provisioning, changes, tagging, and governance using Python (and optionally Go) and CI/CD pipelines.
    • Build integrations with ServiceNow (CMDB, Change, Catalog), FinOps platforms, tagging workflows, and reporting.
    • Author and maintain Terraform modules for network patterns; establish standards and reusable templates.
    Policy as Code & Compliance
    • Implement policy as code using OPA/Conftest or Sentinel; enforce guardrails on Terraform plans and runtime configs.
    • Build compliance controls and continuous validation (CIS benchmarks, least privilege, route/firewall policies, DNS change governance).
    Cost Optimization (FinOps)
    • Partner with FinOps to drive cost visibility and optimization: resource tagging automation (a tag-guardrail sketch follows this listing), rightsizing, data egress analysis, load balancer/caching strategies, and vanity/private endpoints.
    • Integrate with FinOps tooling (e.g., Apptio, Turbonomic) to analyze utilization and automate recommendations.
    Reliability & Observability
    • Establish SLOs for network services (DNS, routing, LB, NAT); build dashboards, alerts, and runbooks.
    • Participate in on-call rotation and continuous improvement via post-incident reviews.
    Required Qualifications
    • 5–10+ years in infrastructure/network engineering, with 3+ years focused on GCP networking.
    • Strong hands-on experience with GCP: VPC, subnets, Cloud Router/BGP, VPC peering, Private Service Connect, Cloud NAT, Cloud Firewall, Cloud Armor, global/regional load balancers, Cloud DNS.
    • DNS/IPAM/DDI concepts: authoritative/recursive DNS, split-horizon, DNSSEC, record types (A/AAAA/CNAME/TXT/SRV), DHCP lease management.
    • Automation & IaC: Terraform (authoring modules, state management, workspaces), Python scripting, CI/CD (GitHub Actions/GitLab CI/Azure DevOps).
    • Policy as Code: OPA/Conftest or HashiCorp Sentinel; pre-commit hooks and plan enforcement.
    • ServiceNow integrations: Catalog/Change/CMDB; API-based workflows for provisioning and approvals.
    • Solid understanding of network security (firewalls, segmentation, WAF/CDN, identity-aware proxies, TLS, certificates).
    • Experience with observability (logs/metrics/traces), flow logs, packet capture tools, and performance tuning.
    • Strong documentation, stakeholder communication, and operational discipline (runbooks, change governance).
    Nice to Have
    • Experience with Apptio or Turbonomic for cost and performance optimization.
    • Hands-on experience with DDI platforms (e.g., Infoblox, BlueCat) and PKI/certificate management.
    • Kubernetes networking (CNI, Ingress, service mesh, NetworkPolicies).
    • Multi-cloud exposure (AWS/Azure) and hybrid connectivity (VPN, Direct Peering/Interconnect).
    • GCP Professional Cloud Network Engineer certification; Terraform Associate.
    • Experience with RESTful API design, event-driven automation, and GitOps practices.
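
    To illustrate the tagging guardrail mentioned under Cost Optimization, here is a hedged Python sketch that scans a Terraform plan exported with `terraform show -json tfplan` and flags resources missing required labels; the label names are assumptions, not a real policy:

```python
# Flag plan resources that lack required FinOps labels.
import json
import sys

REQUIRED = {"cost-centre", "owner", "environment"}  # hypothetical label set

def missing_labels(plan_path: str) -> list[tuple[str, set]]:
    with open(plan_path) as f:
        plan = json.load(f)
    failures = []
    for rc in plan.get("resource_changes", []):
        # "after" is None for destroyed resources; treat that as no labels.
        after = (rc.get("change") or {}).get("after") or {}
        labels = after.get("labels") or {}
        gap = REQUIRED - set(labels)
        if gap:
            failures.append((rc["address"], gap))
    return failures

if __name__ == "__main__":
    for address, gap in missing_labels(sys.argv[1]):
        print(f"{address}: missing labels {sorted(gap)}")
```

    In practice this check would run in CI before apply, alongside OPA/Conftest or Sentinel policies.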


  • Infrastructure Domain Architect  

    - Sheffield
    Job Description
    Location: Sheffield
    M365 Backup Design Architect (2x Vacancies)
    Responsibilities
    Responsible for the overall architecture and strategy for Rubrik M365 backup implementation across HSBC’s enterprise tenant.
    Key Responsibilities:
    Define end-to-end backup architecture for Microsoft 365 workloads (Exchange, SharePoint, OneDrive, Teams).
    Design high-availability and recovery strategies.
    Ensure compliance with HSBC security and regulatory standards.
    Capacity planning for 350k users and petabyte-scale data (a sizing sketch follows this listing).
    Integrate Rubrik with existing HSBC infrastructure and identity management.
    Provide technical leadership and governance throughout the project lifecycle.
    Required Skills
    8–10+ years in Microsoft 365, with proven delivery of tenant backups at enterprise scale.
    Deep expertise in Rubrik Enterprise Backup solutions (or other tenant backup software).
    Strong knowledge of Microsoft 365 architecture and APIs.
    Experience with large-scale enterprise deployments.
    Familiarity with compliance frameworks (GDPR, financial regulations).
    Excellent stakeholder communication and documentation skills.
    Mandatory Skills: Microsoft Exchange Server Admin.
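
    As a back-of-envelope illustration of the sizing exercise above: the per-user figures below are assumptions chosen only for the arithmetic, not HSBC data.

```python
# Rough M365 backup sizing; every figure here is an assumption.
users = 350_000
gb_per_user = {"Exchange": 50, "OneDrive": 15, "SharePoint/Teams": 10}

total_gb = users * sum(gb_per_user.values())
print(f"protected data ~= {total_gb / 1_000_000:.1f} PB")  # ~26.2 PB

# An assumed 2% daily change rate drives incremental backup volume.
print(f"daily delta ~= {total_gb * 0.02 / 1_000_000:.2f} PB")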



    Requirements: Microsoft 365, Rubrik, Architecture, APIs
  • Security Architect (Akamai)

    - Leeds
    Job Description
    Location: Leeds/Halifax/Manchester
    The JD covers network security skills broadly; it is not exclusively Akamai.
    You will be part of the cross-discipline Digital Edge & Cyber Security Team and work with other cyber professionals across Digital Cyber Security and the wider organisation, contributing to the success of the team across multiple aspects.
    The Digital Edge & Cyber Security team within Digital Frameworks delivers and maintains security solutions for our Enterprise and Digital Channels. Examples of what we focus on include, but are not limited to: DDoS, vulnerability management and threat intelligence, and certification, ensuring layer 6 & 7 defences stay one step ahead of cyber criminals (a certificate-expiry sketch follows this listing).

    We're involved in all the incidents and threats to Lloyds' cyber security to understand how we can mitigate future attacks. Looking to the future, there will be a focus on Automation & Terraform!
    You'll also help develop and deliver cyber security solutions for the Group, including critical work with our target cloud platforms to deliver our future security software and configurations using Akamai, GCP, and Azure cloud-native products.

    What do we need to see from you?

    We like people who come from diverse backgrounds and bring new ways of thinking to the team. To be seriously considered and shortlisted, we need to see the following as a minimum:
    •    A prior background within cyber security and a passion to continuously understand and learn the latest in cyber defences. We would like to hear how we could use this knowledge to protect our customers & colleagues.
    •    Good knowledge of DDoS, Bot and DNS protection.
    •    Solid understanding of how cyber defence is applied through the networking layers (routing/switching, IP, network protocols, firewalls, WAF)
    •    The ability to take ownership and deal with issues directly, identifying solutions to minimize blocking issues.
    •    Experience engaging and supporting key internal relationships
    There are also some qualities we desire on top of the minimum criteria above, so if you have any of these, please let us know in your CV: automation experience and associated coding skills in Python or similar, and any knowledge of cloud technologies, encryption, and virtualisation/containerisation.
    Do
    1. Design and develop enterprise cyber security strategy and architecture
    a. Understand security requirements by evaluating business strategies and conducting system security vulnerability and risk analyses
    b. Identify risks associated with business processes, operations, information security programs, and technology projects
    c. Identify and communicate current and emerging security threats, and design security architecture elements to mitigate threats as they emerge
    d. Identify security design gaps in existing and proposed architectures and recommend changes or enhancements
    e. Provide product best-fit analysis to ensure end-to-end security covering different facets of architecture, e.g. layered security, zoning, integration aspects, API, endpoint security, data security, compliance and regulations
    f. Demonstrate experience in performing security assessments against NIST frameworks, SANS, CIS, etc.
    g. Provide support during technical deployment, configuration, integration, and administration of security technologies
    h. Demonstrate experience with ITIL or key process-oriented domains such as incident management, configuration management, change management, and problem management
    i. Provide assistance with disaster recovery in the event of security breaches, attacks, intrusions, or unusual, unauthorized, or illegal activity
    j. Provide solutions for RFPs received from clients and ensure overall design assurance
    Mandatory Skills: Akamai WAF.
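
    As a small illustration of the certification focus mentioned above, this standard-library Python sketch reports the days until a host's TLS certificate expires; the hostname is a placeholder:

```python
# Layer-6 hygiene check: days until a host's TLS certificate expires.
import datetime
import socket
import ssl

def days_to_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["notAfter"] looks like "Jun  1 12:00:00 2026 GMT".
    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"])
    )
    return (expires - datetime.datetime.utcnow()).days

print(days_to_expiry("example.com"))  # placeholder host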


  • Field Technician  

    - London
    Job Description
    Title: Field Technician
    Mandatory Skills: 5G Networks Deployment and Field.


    Location: London; UK travel required (Manchester, Brighton, Leeds, Bristol, and Staines)
    Role Overview
    We are seeking Field Technicians to collaborate with our Network Delivery team. The successful candidates will provide local hands-and-eyes support at Bupa locations nationwide during new network installations, migrations, deployments, troubleshooting, and remediation. Candidates must have experience with hardware, racking, patching, cabling, and documentation. Knowledge of networking and Wi-Fi is beneficial.



    This role requires the ability to travel to UK Bupa locations and work independently or in small teams on a regular basis.



    Key Responsibilities:

    • Install, configure, and maintain site infrastructure including servers, networking equipment, and endpoints
    • Identify and resolve hardware, software, and connectivity issues
    • Provide on-site and remote technical support as needed; able to travel within the UK 60-80% of the time
    • Maintain accurate documentation of configurations, installations, and maintenance activities
    • Adhere to company standards, safety protocols, and regulatory requirements during all activities
    • Ensure all equipment and installations comply with technical and safety guidelines
    Essential Skills & Experience

    • Strong background in infrastructure installation, cabling, comms rooms, and power supply
    • Understands health and safety protocols when on site
     

    Desirable (Nice-to-Have)
    Hands-on experience with:
    • Palo Alto ION devices
    • Palo Alto Strata Cloud Manager
    • Meraki networking solutions
    • Cisco Access Points
    • SolarWinds
     



    Requirements: 5G Networks Deployment and Field

