Python: Query a GraphQL endpoint and save the response as JSON
import requests
import json

# GraphQL endpoint
url = 'YOUR_GRAPHQL_ENDPOINT'

# GraphQL query. Replace this with your actual query.
query = """
{
    yourQuery {
        field1
        field2
        field3
    }
}
"""

# Headers, if needed (e.g., for authentication)
headers = {
    'Content-Type': 'application/json',
    # Uncomment and replace with your token if authentication is required
    # 'Authorization': 'Bearer YOUR_ACCESS_TOKEN',
}

# Make the request
response = requests.post(url, json={'query': query}, headers=headers)

# Check for errors
if response.status_code == 200:
    # Parse the JSON data from the response
    data = response.json()
    # Specify the JSON file you want to write to
    json_file = 'output.json'
    # Write the JSON data to a file
    with open(json_file, 'w', encoding='utf-8') as file:
        json.dump(data, file, ensure_ascii=False, indent=4)
    print("JSON file has been created successfully.")
else:
    print(f"Failed to fetch data: {response.status_code} {response.text}")
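One refinement worth knowing: per the common GraphQL-over-HTTP convention, parameterized queries pass a `variables` object in the POST body alongside the query text, which avoids string interpolation. The sketch below uses hypothetical field names (`yourQuery`, `field1`, `$limit`); also note that a GraphQL server can return HTTP 200 and still report errors in the response body.

```python
# Sketch: a parameterized GraphQL query (field names are placeholders).
query = """
query GetItems($limit: Int!) {
  yourQuery(limit: $limit) {
    field1
  }
}
"""

# The POST body carries both the query text and a 'variables' object.
payload = {'query': query, 'variables': {'limit': 10}}

# The request itself is the same as above:
# response = requests.post(url, json=payload, headers=headers)

def extract_data(body):
    """Return body['data'], raising if the server reported GraphQL errors.

    GraphQL servers may answer HTTP 200 yet include an 'errors' list,
    so checking only response.status_code is not enough.
    """
    if body.get('errors'):
        raise RuntimeError(f"GraphQL errors: {body['errors']}")
    return body['data']
```

In practice you would call `extract_data(response.json())` before writing the output file, so a partially failed query does not get saved as if it succeeded.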
Python: Convert JSON to CSV

import json
import csv

# Sample JSON data (replace this with your actual JSON data)
json_data = '''
[
    {"name": "John Doe", "age": 30, "city": "New York"},
    {"name": "Jane Doe", "age": 25, "city": "Los Angeles"}
]
'''

# Parse the JSON data
data = json.loads(json_data)

# Specify the CSV file name
csv_file = "output.csv"

# Open the CSV file for writing
with open(csv_file, mode='w', newline='') as file:
    # Create a CSV writer object
    writer = csv.writer(file)
    # Write the header (assuming all dictionaries have the same keys)
    writer.writerow(data[0].keys())
    # Write the data rows
    for item in data:
        writer.writerow(item.values())

print("CSV file has been created successfully.")
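The script above assumes every record has the same keys in the same order. When records can differ, `csv.DictWriter` is a safer choice: it maps values to columns by key and fills missing fields instead of silently misaligning rows. A minimal sketch (the sample records are illustrative; it writes to an in-memory buffer, but a file opened with `newline=''` works the same way):

```python
import csv
import io

# Records with differing keys.
records = [
    {"name": "John Doe", "age": 30, "city": "New York"},
    {"name": "Jane Doe", "age": 25},  # no "city" key
]

# Collect the union of keys, preserving first-seen order.
fieldnames = []
for record in records:
    for key in record:
        if key not in fieldnames:
            fieldnames.append(key)

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames, restval="")
writer.writeheader()
writer.writerows(records)

csv_text = buffer.getvalue()
```

`restval=""` controls what lands in a column when a record lacks that key; passing `extrasaction="ignore"` would additionally tolerate records with unexpected extra keys.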
Technical capabilities
Code maintainability (core): make it easy for developers to find, reuse, and change code, and keep dependencies up to date.
Continuous delivery (core): make deploying software a reliable, low-risk process that can be performed on demand at any time.
Continuous integration (core)
Database change management (core): make sure database changes don't cause problems or slow you down.
Deployment automation (core)
Empowering teams to choose tools (core)
Flexible infrastructure (core)
Monitoring and observability (core): build tooling to help you understand and debug your production systems.
Test automation (core)
Test data management (core)
Trunk-based development (core): prevent merge-conflict hassles with trunk-based development practices.
Version control (core): implement the right version control practices for reproducibility and traceability.

Process capabilities
Loosely coupled architecture (core)
Monitoring systems to inform business decisions
Proactive failure notification
Shifting left on security (core)
Streamlining change approval (core)
Visibility of work in the value stream

Cultural capabilities
Generative organizational culture (core)
Transformational leadership: empower software delivery teams as a business leader; measure and enable performance to help teams deliver value.
Learning culture: grow a learning culture and understand its impact on your organizational performance.
To ensure your efforts to enhance Engineering Excellence (EE) capabilities lead to observable improvements in DORA metrics, in capability and enabler scores, and in the adoption of enterprise and architecture tools and products, adopt a structured approach to measuring outcomes and fostering adoption. Here's how you can align your project initiatives with these expected outcomes:
Observable Improvements in DORA Metrics
Set Specific Targets: For each DORA metric, establish clear, quantifiable targets for improvement. These targets should be challenging yet achievable within your project timeframe.
Regular Monitoring and Reporting: Implement tools and processes for continuous monitoring of these metrics. Create dashboards that provide real-time visibility into these metrics for all stakeholders.
Iterative Improvement Process: Use agile methodologies to iteratively implement changes and measure their impact on the DORA metrics. Adjust strategies based on what is learned from each iteration.
Enhancements in Capabilities and Enablers Scores
Define Capabilities and Enablers: Clearly define what capabilities and enablers mean within your organization. This could include aspects like automation level, team collaboration effectiveness, and use of best practices in software development.
Baseline Assessment: Conduct an initial assessment to establish baseline scores for each capability and enabler. Use surveys, tool analytics, and interviews to gather data.
Targeted Improvement Plans: For areas identified as needing improvement, develop targeted action plans. These should include specific initiatives, responsible teams, and timelines.
Regular Review and Adjustment: Schedule regular review sessions to assess progress against the capabilities and enablers scores. Be prepared to adjust plans based on feedback and observed outcomes.
Adoption of Enterprise and Architecture Tools and Products
Needs Analysis and Tool Selection: Conduct a thorough analysis of current and future needs to select the right enterprise and architecture tools and products. Involve stakeholders from different teams to ensure the selected tools meet a broad range of needs.
Pilot and Rollout Phases: Introduce new tools through a pilot phase, allowing for adjustments based on user feedback. Following successful pilots, plan a broader rollout with clear timelines and support structures.
Training and Support: Provide comprehensive training sessions, documentation, and ongoing support to ensure smooth adoption of new tools and products. Consider creating a community of practice around these tools for knowledge sharing and support.
Measure Adoption Rates: Define metrics for measuring the adoption and effective use of new tools and products. These could include user engagement metrics, satisfaction scores, and the degree of integration into daily workflows.
Feedback Loops: Establish mechanisms for collecting feedback on the tools and products. Use this feedback for continuous improvement, ensuring the tools evolve to meet the changing needs of the organization.
Ensuring Success
Stakeholder Engagement: Engage with stakeholders at all levels throughout the project to ensure alignment and buy-in. This includes regular updates, involving them in decision-making processes, and addressing their concerns promptly.
Celebrate Milestones: Recognize and celebrate achievements and milestones. This helps in building momentum and maintaining team morale.
Continuous Learning Culture: Foster a culture that values continuous learning and improvement. Encourage teams to experiment, learn from failures, and share knowledge across the organization.
By focusing on these strategies, you can ensure that your project not only achieves its immediate goals but also lays the foundation for sustained improvements in engineering excellence over time.
Deployment Frequency (DF): Measures how often an organization successfully releases to production. High deployment frequency is indicative of a more agile and responsive development process.
Lead Time for Changes (LT): The amount of time it takes for a commit to be deployed into production. Shorter lead times indicate that the organization is able to deliver new features, fixes, and updates to customers more quickly.
Time to Restore Service (TRS): How long it takes an organization to recover from a failure in production. A shorter time to restore service suggests that the organization is more effective at diagnosing and fixing issues when they occur, minimizing the impact on end users.
Change Failure Rate (CFR): The percentage of deployments causing a failure in production. A lower change failure rate indicates that the organization is better at deploying changes with minimal disruptions to the service.
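As a concrete sketch, all four metrics can be computed from per-deployment records. The record shape below is an assumption for illustration; in practice the timestamps would come from your version control and CI/CD systems, and the failure data from incident tooling.

```python
from datetime import datetime

# Hypothetical deployment records over a one-week window: commit time,
# deploy time, whether the deploy caused a production failure, and how
# long recovery took.
deploys = [
    {"committed": datetime(2024, 3, 1, 9),  "deployed": datetime(2024, 3, 1, 15),
     "failed": False, "restore_hours": 0},
    {"committed": datetime(2024, 3, 2, 10), "deployed": datetime(2024, 3, 3, 10),
     "failed": True,  "restore_hours": 2},
    {"committed": datetime(2024, 3, 4, 8),  "deployed": datetime(2024, 3, 4, 20),
     "failed": False, "restore_hours": 0},
    {"committed": datetime(2024, 3, 7, 9),  "deployed": datetime(2024, 3, 8, 9),
     "failed": False, "restore_hours": 0},
]
period_days = 7

# Deployment Frequency: deploys per day over the period.
df = len(deploys) / period_days

# Lead Time for Changes: median commit-to-deploy time, in hours.
lead_times = sorted((d["deployed"] - d["committed"]).total_seconds() / 3600
                    for d in deploys)
mid = len(lead_times) // 2
lt = (lead_times[mid] if len(lead_times) % 2
      else (lead_times[mid - 1] + lead_times[mid]) / 2)

# Change Failure Rate: share of deploys that caused a failure.
cfr = sum(d["failed"] for d in deploys) / len(deploys)

# Time to Restore Service: mean recovery time across failed deploys.
failures = [d["restore_hours"] for d in deploys if d["failed"]]
trs = sum(failures) / len(failures) if failures else 0.0

print(f"DF={df:.2f}/day  LT={lt:.1f}h  CFR={cfr:.0%}  TRS={trs:.1f}h")
```

The median (rather than the mean) for lead time is deliberate: one slow hotfix-heavy week would otherwise dominate the number. The same records feed all four metrics, which is why instrumenting the deployment pipeline once pays off across the board.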
1. Assessing Current EE Capabilities
a. Baseline Current DORA Metrics
Deployment Frequency & Lead Time for Changes: Measure the current state of deployment frequency and the lead time for changes by analyzing the CI/CD pipeline and version control system data.
Time to Restore Service & Change Failure Rate: Assess the incident management process and post-mortem reports to understand the time to restore service and change failure rate.
b. Identify Gaps and Opportunities
Conduct interviews and surveys with development, operations, and QA teams to understand their perspectives on current bottlenecks and inefficiencies.
Use tools to automatically collect and analyze data related to the DORA metrics from your CI/CD pipelines, issue tracking systems, and operational monitoring tools.
c. Set Benchmark Goals
Based on your assessment, establish realistic benchmark goals for improvement in each DORA metric.
Consider industry benchmarks but adjust for your organization's size, complexity, and specific challenges.
2. Planning for Implementation
a. Tool Selection
CI/CD and Automation Tools: Identify tools that can streamline your CI/CD pipelines, automate testing, and facilitate more frequent, reliable deployments.
Monitoring and Alerting: Choose tools that offer comprehensive monitoring of your applications and infrastructure, with alerting capabilities for quick incident response.
Collaboration and Knowledge Sharing: Implement platforms that enhance collaboration among teams and knowledge sharing about best practices, incidents, and post-mortems.
b. Best Practices and Processes
Code Review and Testing Practices: Establish or improve practices around code review and automated testing to ensure quality and reduce the change failure rate.
Incident Management and Blameless Post-mortems: Develop or refine your incident management process to reduce time to restore service, emphasizing learning from failures without assigning blame.
Continuous Learning and Improvement: Create programs or sessions for sharing learnings, conducting workshops on new tools, and discussing ways to overcome challenges identified in the assessment phase.
3. Implementation and Continuous Improvement
a. Rollout Plan
Develop a phased rollout plan for new tools and practices, starting with pilot projects or teams.
Include training sessions, documentation, and support resources to ensure teams are well-prepared to adopt new tools and practices.
b. Measure and Adjust
Regularly measure the impact of implemented changes on your DORA metrics.
Use feedback from teams and quantitative data to make adjustments and continuously improve.
c. Celebrate Success and Iterate
Recognize and celebrate improvements and successes in meeting or exceeding benchmark goals.
Use insights gained to set new goals and begin the cycle again, aiming for continuous improvement in your engineering excellence capabilities.
For each in-scope application, tailor this approach based on its specific context, technology stack, and team dynamics. By systematically assessing, planning, and implementing improvements, you can enhance your engineering excellence capabilities, leading to better productivity, higher quality, and more reliable software delivery.