Surveys
Please specify your role in the organisation
IT staff (all other non-management IT positions): 44%
Middle management (line management): 27.8%
Consultant: 16.2%
Executive management (C-level): 12%
Sample: 216
How many people does your company employ?
Fewer than 50: 27.1%
10 000 or more: 17.8%
1 000-4 999: 13.6%
50-99: 9.3%
500-999: 8.9%
100-199: 8.4%
5 000-9 999: 7.9%
200-499: 7%
Sample: 214
How many people report to you?
Fewer than 10: 83.6%
10-49: 13.6%
50-99: 1.4%
200-499: 0.9%
500-999: 0.5%
100-199: 0%
1 000 or more: 0%
Sample: 214
In which industry does your company operate?
Financial services: 19.9%
Software and internet: 18.1%
Telecommunications: 10.6%
Government: 9.3%
Other: 8.8%
Computers and electronics: 6%
Education: 3.7%
Professional services: 3.2%
Healthcare: 2.3%
Non-profit organisation: 2.3%
Retail: 2.3%
Transportation and logistics: 2.3%
Marketing and advertising: 1.9%
Media and entertainment: 1.9%
Travel: 1.9%
Agriculture: 1.4%
Mining: 1.4%
Consumer goods: 0.9%
Automotive: 0.5%
Gaming: 0.5%
Power and utilities: 0.5%
Wholesale and distribution: 0.5%
Hospitality: 0%
Life sciences: 0%
Manufacturing: 0%
Oil and gas: 0%
Real estate and construction: 0%
Sample: 216
1. In your application development projects, how do you approach threat modelling?
All applications go through threat modelling and attack vectors are addressed during implementation: 39.1%
Threat modelling is not done in the project: 20.7%
Threat modelling is done for a specific application only when there is an ad hoc request: 16.7%
Threat modelling is done only for critical and high-priority applications: 16.1%
The threat modelling process identifies all attack vectors, but they are not addressed during development: 7.5%
Sample: 174
2. Does the team have coding skills to build security protection into frameworks and templates in ways that are safe and easy to use?
The team has basic secure coding skills but needs guidance to implement security concepts in depth: 24.1%
Some developers are trained in secure coding practices: 19.5%
Both security framework SMEs and individual developers work on developing secure code: 16.1%
No, the development team does not have application security knowledge: 15.5%
All developers are trained on security concepts but need a generic framework implementation for the application/project: 14.4%
A standard security framework and reusable components are built for the project, making security implementation easy: 10.3%
Sample: 174
3. Is there close collaboration between security engineers and software engineers in the team?
Yes, real-time collaboration exists between the security engineering and software engineering teams: 30.2%
The two teams connect on an as-needed basis: 25%
No collaboration; the teams mostly operate in silos: 12.8%
Collaboration exists at intervals in all phases of the project but is not consistent across all teams: 12.2%
Collaboration has been initiated but is not implemented consistently: 11%
Collaboration exists only after release scans and during remediation: 8.7%
Sample: 172
4. Does security have self-service options for automated continuous integration to provide self-service builds and testing?
A self-service option is provided wherever possible in the process to enable automation: 26.3%
No self-service options are available for automated continuous integration: 21.1%
Partial automation of self-service options exists: 20.5%
Almost all phases in the project have self-service options for the teams: 16.4%
Self-service options have been enabled for simple tasks but lack automation: 15.8%
Sample: 171
5. Are enterprise compliance standards clearly understood by the project team?
Compliance standards are well defined in the project team, and the team understands and is able to implement them: 45.7%
Some compliance standards are available but they are not well defined: 26%
Compliance standards are defined in the project team, but the team lacks the understanding to implement them: 22%
No compliance standards are followed by the project team, so the team has no clarity: 6.4%
Sample: 173
6. Does the project follow any standard metrics/KPIs?
The project follows internally defined metrics only: 28.1%
Mostly structured metrics are available for reporting; tool usage has been initiated for governance: 26.9%
Industry-standard metrics are applied in the project, with tool-based governance and business intelligence: 26.3%
KPIs and metrics for security measures are poorly defined in the team/project: 11.4%
No standard or defined metrics/KPIs are used in the project: 7.2%
Sample: 167
7. How is security monitored in the operations environment of the project?
Security is monitored and tools are used for both monitoring and reporting: 35.5%
An automated security monitoring and reporting process is established in the team: 16.9%
Security monitoring exists as a process but lacks proper measures for monitoring security: 15.7%
Security is monitored at defined intervals but reporting is a manual process: 15.1%
Security measures have been defined in the process but are not adopted at a regular frequency: 11.4%
No security monitoring in the operations environment for software and/or infrastructure: 5.4%
Sample: 166
8. What is the frequency of static scans performed for the application codebase?
Code is scanned automatically at the end of each day, with an IDE plug-in integrated: 22.6%
Scans are performed in silos based on requests for releases: 21.3%
Developers perform static scans once a day for the entire source code: 16.5%
Scans are performed on an ad hoc basis, mostly when there are attacks: 15.2%
Static scans run on delta code commits, supported by IDE plug-ins for developers: 14%
No scans are performed for applications as of today: 10.4%
Sample: 164
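One of the more mature options in question 8 is scanning only the delta of each code commit rather than the full codebase. The Python sketch below illustrates that idea under stated assumptions: the git diff invocation is standard, but the static-scanner command, its --file flag and its exit-code convention are hypothetical placeholders for whichever scanner a team actually uses.

```python
import subprocess
import sys

# File types the (hypothetical) scanner understands.
SCANNABLE_SUFFIXES = (".py", ".java", ".js", ".ts")

def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List files changed between the base branch and the current HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def scan_delta(files: list[str]) -> int:
    """Run the (hypothetical) static scanner on each changed source file.

    Returns the number of files with findings, assuming the scanner signals
    findings with a non-zero exit code.
    """
    findings = 0
    for path in files:
        if not path.endswith(SCANNABLE_SUFFIXES):
            continue
        result = subprocess.run(["static-scanner", "--file", path])  # hypothetical CLI
        if result.returncode != 0:
            findings += 1
    return findings

if __name__ == "__main__":
    delta = changed_files()
    failed = scan_delta(delta)
    print(f"Scanned delta of {len(delta)} changed file(s); {failed} with findings.")
    # Fail the pipeline step if any changed file has findings.
    sys.exit(1 if failed else 0)
```

In practice the same delta list is typically surfaced through an IDE plug-in as well, so developers see results before the commit reaches the pipeline.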
9. Are all applications scanned for open source component analysis?
Only key/strategic projects are scanned for open source components: 31.9%
Only on request: 28.3%
Static scans are performed for new/modified code before it is committed to code repositories; IDE (integrated development environment) plug-ins are supported for developers to ensure ease of use: 25.9%
No: 13.9%
Sample: 166
10. Is "all code" considered for automated testing (ie, not just application code, but also infrastructure code, eg, Ansible Playbooks, Terraform, etc)?
Yes, all code is considered for automated security testing, both application and infrastructure: 24.4%
No, only custom application code is considered for scanning: 21.3%
Application code and open source components are scanned in the pipeline: 15.6%
Along with application code, other custom code is scanned only on request as required: 15%
Application code, open source components and batch-job code bases are scanned in the pipeline: 14.4%
All application code is scanned in the pipeline, but supporting code/scripts for infrastructure are not included: 9.4%
Sample: 160
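Question 10 distinguishes pipelines that scan only application code from those that also cover infrastructure code such as Ansible playbooks or Terraform. The Python sketch below shows one way a pipeline step might route both kinds of code to their respective scanners; the app-scanner and iac-scanner commands are hypothetical placeholders, and the file-classification rules are an assumption for illustration only.

```python
import subprocess
import sys
from pathlib import Path

# Assumed classification rules: application source vs infrastructure-as-code.
APP_SUFFIXES = {".py", ".java", ".js", ".ts"}
IAC_SUFFIXES = {".tf"}             # Terraform
IAC_DIRS = {"playbooks", "roles"}  # typical Ansible layout

def classify(repo_root: Path) -> tuple[list[Path], list[Path]]:
    """Split every file in the repository into application code and IaC."""
    app_files, iac_files = [], []
    for path in repo_root.rglob("*"):
        if not path.is_file():
            continue
        if path.suffix in IAC_SUFFIXES or IAC_DIRS & set(path.parts):
            iac_files.append(path)
        elif path.suffix in APP_SUFFIXES:
            app_files.append(path)
    return app_files, iac_files

def run_scanner(command: str, files: list[Path]) -> int:
    """Invoke a (hypothetical) scanner CLI on a batch of files."""
    if not files:
        return 0
    result = subprocess.run([command, *map(str, files)])
    return result.returncode

if __name__ == "__main__":
    app_files, iac_files = classify(Path("."))
    app_rc = run_scanner("app-scanner", app_files)  # hypothetical CLI
    iac_rc = run_scanner("iac-scanner", iac_files)  # hypothetical CLI
    print(f"Scanned {len(app_files)} application and {len(iac_files)} infrastructure files.")
    sys.exit(1 if (app_rc or iac_rc) else 0)
```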
11. How is feedback provided to developers on issues found in scans?
The scan report from the tool is sent to developers directly, with no automation in the process: 25.5%
Report updates and tracking are automated through a tool, and security SMEs connect regularly with developers to guide and review vulnerability status: 23%
A filtered scan report is uploaded to a tool via automation, with centralised access for all developers to track status: 21.7%
The scan report is reviewed, false-positive analysis (FPA) is done and the final report is sent to developers; no automation in the process: 21.1%
No feedback is provided to developers: 8.7%
Sample: 161
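Several answers to question 11 hinge on whether raw scanner output is filtered (for example, removing findings already triaged as false positives) before developers see it. The sketch below shows that filtering step in Python; the JSON shape of the scan report and the suppression-list format are assumptions for illustration, not any particular tool's schema.

```python
import json
from pathlib import Path

def load_suppressions(path: Path) -> set[str]:
    """Read previously triaged false-positive finding IDs, one per line."""
    if not path.exists():
        return set()
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}

def filter_report(raw_report: dict, suppressed: set[str]) -> list[dict]:
    """Keep only findings that have not been marked as false positives.

    Assumes a report shaped like {"findings": [{"id": ..., "severity": ...}]}.
    """
    return [f for f in raw_report.get("findings", []) if f.get("id") not in suppressed]

def write_developer_summary(findings: list[dict], out_path: Path) -> None:
    """Emit a short, developer-facing summary grouped by severity."""
    by_severity: dict[str, int] = {}
    for finding in findings:
        sev = finding.get("severity", "unknown")
        by_severity[sev] = by_severity.get(sev, 0) + 1
    lines = [f"{sev}: {count}" for sev, count in sorted(by_severity.items())]
    out_path.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    raw = json.loads(Path("scan-report.json").read_text())      # assumed file name
    suppressed = load_suppressions(Path("false-positives.txt"))  # assumed file name
    remaining = filter_report(raw, suppressed)
    write_developer_summary(remaining, Path("developer-summary.txt"))
    print(f"{len(remaining)} finding(s) after filtering; summary written for developers.")
```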
12. Is software component analysis done in each CI/CD pipeline?
Yes, software component analysis is integrated in the CI server as a job in the DevOps pipeline: 31.3%
Software component analysis is done in the automated pipeline and a tool is used for scans; developers also use an IDE plug-in: 16.9%
Yes, but only when a production issue is reported, and mostly as manual research: 16.3%
No: 15%
Yes, but in a silo and not integrated into the DevOps workflow: 12.5%
Along with integrated scans for software components, developers have an IDE plug-in to remediate vulnerabilities during development: 8.1%
Sample: 160
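Where question 12's leading answer applies, software component analysis runs as a job in the CI server and can gate the build. The Python sketch below shows only that gating logic, under explicit assumptions: the sca-tool command, its --output flag and the JSON layout of its results are hypothetical stand-ins for whatever SCA product is in use.

```python
import json
import subprocess
import sys
from pathlib import Path

# Build fails if any component has a vulnerability at one of these levels.
BLOCKING_SEVERITIES = {"critical", "high"}
REPORT_PATH = Path("sca-report.json")  # assumed output location

def run_component_analysis() -> None:
    """Run the (hypothetical) SCA tool and write its JSON report."""
    subprocess.run(["sca-tool", "--output", str(REPORT_PATH)], check=True)  # hypothetical CLI

def blocking_vulnerabilities(report: dict) -> list[dict]:
    """Collect vulnerabilities severe enough to fail the pipeline.

    Assumes a report shaped like:
    {"components": [{"name": ..., "vulnerabilities": [{"id": ..., "severity": ...}]}]}
    """
    blocking = []
    for component in report.get("components", []):
        for vuln in component.get("vulnerabilities", []):
            if vuln.get("severity", "").lower() in BLOCKING_SEVERITIES:
                blocking.append({"component": component.get("name"), **vuln})
    return blocking

if __name__ == "__main__":
    run_component_analysis()
    report = json.loads(REPORT_PATH.read_text())
    blocking = blocking_vulnerabilities(report)
    for vuln in blocking:
        print(f"Blocking: {vuln.get('component')} {vuln.get('id')} ({vuln.get('severity')})")
    # Non-zero exit makes the CI job, and therefore the pipeline, fail.
    sys.exit(1 if blocking else 0)
```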