In server-side testing, beyond the compatibility of business functions and the API, we must also consider the stability of the server and its carrying capacity under highly concurrent requests. There is no universal standard for concurrency levels or response-time requirements: every product is different, and the targets depend on the business the component under test serves. If the component backs a rarely used product, there may be no meaningful performance requirement at all. Targets must therefore be derived from the architecture of the component under test, its expected load, and its business goals. This article shares a simple way to write concurrent-request test code in Python.
Python's concurrency model mainly involves threads, processes, and coroutines (Locust, for example, is built on coroutines, which can be thought of as micro-threads). For IO-bound work, multithreading is usually the more efficient choice; for CPU-bound work, multiprocessing is. This article focuses on the IO-bound case, i.e., multithreading. Starting a thread is simple, and we can do it in either a functional or an object-oriented style. First, the functional way:
from threading import Thread

def job(name):
    print("I'm {0}, I want to start work".format(name))

if __name__ == '__main__':
    t = Thread(target=job, args=('Li Si',))
    t.start()
    print('End of main thread execution')
Object-oriented approach:
from threading import Thread

class Job(Thread):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self) -> None:
        print("I'm {0}, I want to start work".format(self.name))

if __name__ == '__main__':
    t = Job('Li Si')
    t.start()
    print('End of main thread program execution')
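Note that in both examples the main thread may print its closing message before the worker finishes, because start() does not wait. When the main thread must wait for its workers, join() is used; a minimal sketch (the worker names are illustrative):

```python
from threading import Thread

def job(name):
    print("I'm {0}, I want to start work".format(name))

if __name__ == '__main__':
    workers = [Thread(target=job, args=(n,)) for n in ('Li Si', 'Wang Wu')]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # block until every worker has finished
    print('End of main thread execution')
```

This start-all-then-join-all pattern is the same one used by the load-test code later in the article.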
Note that the Thread class does not expose the return value of the target function. When testing an API we need the status code, the response time, and the response data, so we subclass Thread and override the run() method to capture what the function under test returns. The specific case code is as follows:
#! coding:utf-8
from threading import Thread

class ThreadTest(Thread):
    def __init__(self, func, args=()):
        '''
        :param func: function under test
        :param args: arguments passed to the function under test
        '''
        super(ThreadTest, self).__init__()
        self.func = func
        self.args = args

    def run(self) -> None:
        self.result = self.func(*self.args)

    def getResult(self):
        try:
            return self.result
        except BaseException as e:
            return e.args[0]
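To see how the wrapper works in isolation, here is a minimal sketch that runs a trivial function (square, a hypothetical stand-in for an API call) across several threads and collects each return value; the class is repeated so the snippet runs standalone:

```python
from threading import Thread

class ThreadTest(Thread):
    def __init__(self, func, args=()):
        super(ThreadTest, self).__init__()
        self.func = func
        self.args = args

    def run(self) -> None:
        # store the return value so the caller can read it after join()
        self.result = self.func(*self.args)

    def getResult(self):
        try:
            return self.result
        except BaseException as e:
            return e.args[0]

def square(x):
    return x * x

if __name__ == '__main__':
    tasks = [ThreadTest(square, args=(i,)) for i in range(5)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    print([t.getResult() for t in tasks])  # [0, 1, 4, 9, 16]
```

getResult() must only be called after join(), since run() may not have set self.result yet.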
Here we take the Baidu homepage as the system under test. After the concurrent requests complete, we collect each request's response time and status code, then compute the median and other statistics from the response times. The complete case code is as follows:
#! /usr/bin/env python
#! coding:utf-8
from threading import Thread
import requests
import matplotlib.pyplot as plt
import datetime
import json
import numpy as np

class ThreadTest(Thread):
    def __init__(self, func, args=()):
        '''
        :param func: function under test
        :param args: arguments passed to the function under test
        '''
        super(ThreadTest, self).__init__()
        self.func = func
        self.args = args

    def run(self) -> None:
        self.result = self.func(*self.args)

    def getResult(self):
        try:
            return self.result
        except BaseException as e:
            return e.args[0]

def baiDu(code, seconds):
    '''
    :param code: status code
    :param seconds: request response time
    :return: (status code, response time in seconds)
    '''
    r = requests.get(url='http://www.baidu.com/')
    code = r.status_code
    seconds = r.elapsed.total_seconds()
    return code, seconds

def calculationTime(startTime, endTime):
    '''Calculate the difference between two times, in seconds'''
    return (endTime - startTime).seconds

def getResult(seconds):
    '''Summarize the server's response-time statistics'''
    data = {
        'Max': sorted(seconds)[-1],
        'Min': sorted(seconds)[0],
        'Median': np.median(seconds),
        '99%Line': np.percentile(seconds, 99),
        '95%Line': np.percentile(seconds, 95),
        '90%Line': np.percentile(seconds, 90)
    }
    return data

def highConcurrent(count):
    '''
    Send concurrent requests to the server
    :param count: number of concurrent requests
    :return: statistics as a JSON string
    '''
    startTime = datetime.datetime.now()
    total = 0
    list_count = list()
    tasks = list()
    results = list()
    # failure details
    fails = []
    codes = list()
    seconds = list()
    for i in range(count):
        t = ThreadTest(baiDu, args=(i, i))
        tasks.append(t)
        t.start()
    for t in tasks:
        t.join()
        if t.getResult()[0] != 200:
            fails.append(t.getResult())
        results.append(t.getResult())
    endTime = datetime.datetime.now()
    for item in results:
        codes.append(item[0])
        seconds.append(item[1])
    for i in range(len(codes)):
        list_count.append(i)
    # generate a visual trend chart of the response times
    fig, ax = plt.subplots()
    ax.plot(list_count, seconds)
    ax.set(xlabel='number of times', ylabel='Request time-consuming',
           title='olap continuous request response time (seconds)')
    ax.grid()
    fig.savefig('olap.png')
    plt.show()
    for i in seconds:
        total += i
    rate = total / len(list_count)
    totalTime = calculationTime(startTime=startTime, endTime=endTime)
    if totalTime < 1:
        totalTime = 1
    # throughput calculation
    throughput = str(int(len(list_count) / totalTime)) + '/S'
    if len(fails) == 0:
        errorRate = 0.00
    else:
        errorRate = len(fails) / len(tasks) * 100
    timeData = getResult(seconds=seconds)
    dict1 = {
        'Throughput': throughput,
        'Average response time': rate,
        'Response time': timeData,
        'Error rate': errorRate,
        'Total number of requests': len(list_count),
        'Number of failures': len(fails)
    }
    return json.dumps(dict1, indent=True, ensure_ascii=False)

if __name__ == '__main__':
    print(highConcurrent(count=1000))
After the code runs, it saves a chart of the request response times and prints information like the following:
{
 "Throughput": "500/S",
 "Average response time": 0.08835436199999998,
 "Response time": {
  "Max": 1.5547,
  "Min": 0.068293,
  "Median": 0.0806955,
  "99%Line": 0.12070111,
  "95%Line": 0.10141509999999998,
  "90%Line": 0.0940216
 },
 "Error rate": 0.0,
 "Total number of requests": 1000,
 "Number of failures": 0
}
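As an aside, the standard library's concurrent.futures offers a thread-pool alternative to hand-rolled Thread subclasses; futures return the function's result directly, so no run() override is needed. This is a sketch of that approach, not the article's original method; timed_call and run_concurrently are illustrative helpers, and the sleep stands in for a real network call:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def timed_call(func):
    '''Run func once and return (result, elapsed seconds).'''
    start = time.perf_counter()
    result = func()
    return result, time.perf_counter() - start

def run_concurrently(func, count, max_workers=50):
    '''Fan out `count` calls of func across a thread pool and collect timings.'''
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(timed_call, func) for _ in range(count)]
        return [f.result() for f in futures]

if __name__ == '__main__':
    # simulate an IO-bound call; swap in requests.get(...) for a real test
    results = run_concurrently(lambda: time.sleep(0.01) or 'ok', count=20)
    print(len(results), results[0][0])  # 20 ok
```

The per-call timings collected here can be fed to the same np.percentile statistics shown above.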
Thank you for reading. In a future update, this code will be packaged as a callable interface. With it, you can quickly verify a server's carrying capacity and check whether requests time out under load.