

How many network ports does Linux allow Python to use?

Published: 2024/10/5

So I've been trying to multithread some internet connections in Python. I've been using the multiprocessing module so I can get around the Global Interpreter Lock. But it seems the system only gives Python one open connection port, or at least it only allows one connection to happen at once. Here's an example of what I'm talking about.

*Note that this is running on a Linux server.

from multiprocessing import Process, Queue

import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000, 10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if a generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the processes (50)
for thread in range(50):
    task = Process(target=check, args=(queue,))
    task.start()

So if you run this, you'll notice that it starts 50 instances of the function, but only runs one at a time. You might think the Global Interpreter Lock is doing this, but it isn't. Try changing the function to a math function instead of a network request and you'll see all 50 threads run simultaneously.
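One thing worth ruling out is a port limit itself. As a sanity check (a Python 3 sketch, unlike the Python 2 snippets in the question), you can ask the OS for ephemeral ports directly by binding sockets to port 0; on a stock Linux box a single process gets many distinct ports with no special configuration, so the serialization seen above is not the kernel rationing ports:

```python
import socket

# Open 100 sockets and bind each one to an OS-chosen ephemeral port.
socks = []
for _ in range(100):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('127.0.0.1', 0))  # port 0 means "pick any free ephemeral port"
    socks.append(s)

# Every socket got its own local port.
ports = {s.getsockname()[1] for s in socks}
print(len(ports))  # 100 distinct ports

for s in socks:
    s.close()
```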

So do I have to work with sockets? Or is there something I can do to give Python access to more ports? Or is there something I'm not seeing? Let me know what you think! Thanks!

*EDIT

So I wrote this script to test things better, using the requests library. It seems I had never tested it this way before. (I had mostly been using urllib and urllib2.)

from multiprocessing import Process, Queue
from threading import Thread
from Queue import Queue as Q

import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the processes (20)
thread_list = []
for thread in range(20):
    task = Process(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the processes so the main process doesn't quit
for each in thread_list:
    each.join()

main_time_end = time.time()

# Put the timer queue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()

main_time_end = time.time()

# Put the timer queue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing all over again, but this time check each url one at a time

# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url, timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)

main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

As you can see, it is multithreading. In fact, most of my tests show that the threading module is actually faster than the multiprocessing module. (I don't understand why!) Here are some of my results:

Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.

Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.

Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.
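For what it's worth, the pattern the timing scripts above hand-roll (a work queue plus a pool of workers) is what concurrent.futures provides out of the box in Python 3. Below is a minimal sketch of the threaded URL check in that style; time.sleep stands in for requests.head so it runs without a network, and the 0.2-second delay is an arbitrary stand-in for a real response time:

```python
import concurrent.futures
import time

def check(url):
    """Return the 'response time' for one url (simulated I/O)."""
    t = time.time()
    time.sleep(0.2)          # stand-in for requests.head(url, timeout=5)
    return time.time() - t

urls = ['http://www.' + str(n) + '.com' for n in range(20)]

start = time.time()
# 20 workers, so all 20 simulated requests overlap instead of running serially.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    times = list(pool.map(check, urls))
total = time.time() - start

average = sum(times) / len(times)
print('average %.2f sec, total %.2f sec' % (average, total))
```

Because the 20 sleeps overlap, the total wall time stays close to a single 0.2-second call rather than the 4 seconds a serial loop would take, which mirrors the threaded-vs-serial gap in the results above.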

This was done on my home network; the response times on my server are much faster. I think my question got answered indirectly, since the problems were on a much more complicated script. All the suggestions helped me optimize it very well. Thanks everyone!
