Topic: How to log in to Twitter with the GAppProxy Google proxy (climbing the wall with good reason; beginner-oriented)

暁美ほむら


How to log in to Twitter with the GAppProxy Google proxy (climbing the wall with good reason; beginner-oriented)

Moderator note:
This thread was unlocked by clowwindy (2011-01-23).
(This article is aimed at beginners.) All the files mentioned in it have been packaged; feel free to download them and study them: http://tora.to/blog/445000.htm
(Please use the fetchServer from the attachment below; the JS one at that link was uploaded by mistake.)
"Climbing the wall" (翻墙) is a delicate term; if you don't believe me, search for it on Google and see what happens. A few days ago someone casually said "I use a paid proxy for Twitter", which both made me feel the pressure and got me interested in Twitter.
Unfortunately, the towering wall of the Celestial Empire has us fenced in, so I decided to bring my own ladder and climb over.
The simplest option is an ordinary proxy server (a proxy IP), but after searching for ages I could hardly find any that worked, never mind one that was clean, ad-free and stable.
Reviews online said that GAppProxy, a proxy built on the Google App Engine platform, is fast and very stable. The instant I saw the word Google I was won over; I am a Google fan, after all.
But then I read the following and my heart sank:
The most unfortunate part is that although GAppProxy can reach Twitter, it is browse-only: you cannot log in to Twitter or YouTube, you cannot register, let alone follow people or post tweets. At first I assumed this was simply down to Twitter's login security requirements plus the limitations of the GAE platform (the Windows client released in 2009 added HTTPS support, but it still does not validate HTTPS certificates, so HTTPS security remains a problem). Logging in to Sina through GAppProxy worked fine, logging in to 企博网 did not, and logging in to Twitter did not either, so I stopped using it for a while.
Later I read that two bugs in GAppProxy cause login failures on some sites. One is the HTTPS certificate problem just mentioned; the other is that GAppProxy mishandles cookies: it parses multiple Set-Cookie fields in the response header incorrectly, so logins to Twitter and similar sites fail because the correct session cookie is never received. But when the goal is to win over a tsundere, backing out is not an option, and so my wall-climbing journey began.
First, a few terms explained; feel free to skip this if you are not interested.


1. What is GAppProxy?
GAppProxy is a free international proxy built on Google App Engine, originally aimed at users on the China education network. Because it rides on Google's powerful servers, it works just as well from the public internet.
2. What is Google App Engine?
Google App Engine is an online application platform provided by Google that supports Python. Put simply, you run a Python program directly on Google App Engine, and Google App Engine provides the hosting space and bandwidth (see the minimal sketch right after this list).
3. What can GAppProxy do for you?
If you are on the education network, you can treat GAppProxy as an international proxy server, similar to the education-network acceleration in the Sogou browser.
If you are on the public internet, you normally don't need GAppProxy, but it comes in handy for visiting certain "vulgar" websites. In short, it is the "ladder" that articles mean when they say "bring your own ladder".
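To make "running a Python program directly on Google App Engine" concrete, here is a minimal sketch of a GAE handler in the same webapp/CGI style used by the fetch.py listed further down. The handler name and the "/" route are illustrative only, not part of GAppProxy:

import wsgiref.handlers
from google.appengine.ext import webapp

class HelloHandler(webapp.RequestHandler):
    def get(self):
        # App Engine hosts and serves this; you only supply the Python code
        self.response.headers["Content-Type"] = "text/plain"
        self.response.out.write("Hello from Google App Engine")

def main():
    application = webapp.WSGIApplication([("/", HelloHandler)])
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
    main()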
Detailed installation steps (there are plenty of tutorials online, but they all share the same bugs: you cannot log in to Twitter, YouTube videos will not play, and so on)
1. Register for Google App Engine: sign in to Google App Engine with a Google account, then click Create an Application. (Google account: go to www.google.com.hk, click Sign in at the top right, and on the sign-in page choose "Create an account now"; it is much like registering a forum account, except the CAPTCHA is something special.) You will then be asked for a mobile phone number to receive a verification code; this step is mandatory, otherwise registration cannot succeed.

After entering the verification code from the text message, you reach the detailed application-creation page.
Fill in the Application Identifier (for example yourname, which gives you the domain yourname.appspot.com; remember it) and the Application Title (anything you like), tick the box accepting the terms of service, and click Save to finish creating the application.


2. Download and install Python and the Google App Engine SDK
Installing Python is no different from installing any ordinary program.
Windows 7 and Vista users, note: try not to install the Google App Engine SDK on the C: drive, because overwriting or modifying files there requires administrator rights, which the default user account does not have; this avoids trouble later when files need to be overwritten or modified.

Configure fetchServer
(From this point on the steps differ from the tutorials found online; this is also the key to getting rid of those bugs.)

Extract the fetchServer folder into the Google App Engine SDK installation directory, by default X:\Program Files\Google\google_appengine\fetchserver. Open the app.yaml file inside the freshly extracted fetchServer folder with Notepad and change ckhuaxiang in the first line to your own Application Identifier, the yourname part of yourname.appspot.com. Note that the file sometimes opens with everything run together, so you may see something like "application: ckhuaxiangversion: 1runtime: pythonapi_versi"; just change the ckhuaxiang part and nothing else.
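For reference, after editing, app.yaml should look roughly like this (the layout matches the file shipped in the attachment; yourname stands for your own Application Identifier):

application: yourname
version: 1
runtime: python
api_version: 1

handlers:
- url: /fetch.py
  script: fetch.py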
Upload fetchserver
Open a command prompt (Start > Run > cmd) and type
cd X:\Program Files\Google\google_appengine\
to enter the directory, then type
appcfg.py update fetchServer
and press Enter (the commands can be copied and pasted; just change the drive letter X). You will then be asked for your Google account and password. Note that nothing is echoed while you type the password in CMD, not even asterisks; ignore that and type it correctly. When the upload finishes, close the window. See the screenshot.


Test fetchserver: open a browser and go to http://yourname.appspot.com/fetch.py. If you get the page shown below, the installation succeeded.
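If you prefer to check from Python instead of a browser, a small sketch like this should print 200 and the start of the status page (replace yourname with your own Application Identifier):

import urllib2

# quick check that the uploaded fetchserver answers
resp = urllib2.urlopen("http://yourname.appspot.com/fetch.py")
print resp.getcode()       # expect 200 if the upload succeeded
print resp.read()[:200]    # beginning of the "GAppProxy is working" page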

Use the proxy
Open the localproxy folder and open proxy.conf with Notepad. In the last line, change ckhuaxiang in "fetch_server = http://ckhuaxiang.appspot.com/fetch.py" to the yourname you entered when creating the application, then save.
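For reference, the lines that matter in proxy.conf end up looking like the sketch below (these are the names proxy.py's parseConf reads; listen_port can stay at its default of 8000, and yourname stands for your own Application Identifier):

# proxy.conf, read by parseConf() in proxy.py; lines starting with '#' are comments
listen_port = 8000
fetch_server = http://yourname.appspot.com/fetch.py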
Set the browser's proxy server address and port to 127.0.0.1:8000.
In Firefox: menu - Tools - Options - Advanced - Network - Connection - Settings - Manual proxy configuration - enter 127.0.0.1 for HTTP proxy and 8000 for port - tick "Use this proxy server for all protocols" - OK.
In IE: Tools - Internet Options - Connections - LAN settings, as shown.





This step is much the same in any browser.
Open the localproxy folder and double-click proxy.py; under normal circumstances a window like the one shown will appear.





Do not close this window while you are browsing through the proxy. When you are not using the proxy, switch the browser settings back to what they were.
Visit http://www.ip138.com/: the reported IP address is 72.14.192.65.
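If you want to double-check from Python rather than the browser, a rough sketch like this sends one request through the local listener on 127.0.0.1:8000 (the exact HTML that ip138 returns will of course differ):

import urllib2

# route a single request through the local GAppProxy listener
opener = urllib2.build_opener(urllib2.ProxyHandler({"http": "http://127.0.0.1:8000"}))
print opener.open("http://www.ip138.com/").read()[:300]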

 


Enter www.twitter.com to check that it opens and browses normally. OK. Note that when you log in to Twitter and similar sites, the browser will complain about a certificate error; just choose to continue anyway.
That completes the installation. (Many thanks to 风风 and C大 for their help while I was learning this.)


Afterword
GAppProxy has two bugs that break logins on some sites: one is the HTTPS certificate problem; the other is that GAppProxy mishandles cookies, because it parses multiple Set-Cookie fields in the response header incorrectly, so logins to Twitter and similar sites fail and the correct session cookie is never received.
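To illustrate the cookie bug: when several Set-Cookie headers are merged into one comma-separated value, a naive split on "," also tears apart the "expires=Thu, 23-Jul-2009 ..." dates inside each cookie. The updated fetch.py re-attaches a fragment to the previous one when it begins with a digit (the date half of an expires attribute). The standalone sketch below reproduces that heuristic for illustration only; split_set_cookie is my own name, not part of GAppProxy:

import re

def split_set_cookie(merged):
    # split a merged Set-Cookie value without breaking "expires=Day, DD-Mon-YYYY" dates,
    # mirroring the heuristic used in the updated fetch.py
    cookies = []
    current = ""
    for part in merged.split(","):
        if current == "":
            current = part
        elif re.match(r"[ \t]*[0-9]", part):
            # looks like the date half of an expires attribute, glue it back on
            current += "," + part
        else:
            cookies.append(current.strip())
            current = part
    if current != "":
        cookies.append(current.strip())
    return cookies

merged = ("a=1; expires=Thu, 23-Jul-2009 10:23:45 GMT; path=/, "
          "b=2; expires=Thu, 23-Jul-2009 10:23:45 GMT; path=/")
print split_set_cookie(merged)   # two cookies, each with its expires date intact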
No working fix has been published online (the ones I found did not work when I tried them), so I went to http://code.google.com/p/gappproxy/ (bring your own ladder) to have a look.
There, at
http://code.google.com/p/gappproxy/source/detail?r=102 I found three updated files. I downloaded them, replaced the two same-named files in the localproxy folder, and uploaded the remaining one, fetch.py, to the server side. (Thanks again to C大 for the localproxy files; the GAppProxy packages offered for download these days all seem to be bundled as a single program, which makes it hard to swap files in to remove the bugs.)
Also, even through the proxy I cannot upload an avatar picture to Twitter (it keeps saying the file is too large), and I cannot log in to CK through it either (no, I am not planning any sock-puppet mischief).
Below is the code of those three files. Interested experts are welcome to look it over; any improvement would benefit everyone.

proxy.py

Quote:
#! /usr/bin/env python
# coding=utf-8
#############################################################################
#                                                                           #
#   File: proxy.py                                                          #
#                                                                           #
#   Copyright (C) 2008-2009 Du XiaoGang <[email protected]>                    #
#                                                                           #
#   Home: http://gappproxy.googlecode.com                                   #
#                                                                           #
#   This file is part of GAppProxy.                                         #
#                                                                           #
#   GAppProxy is free software: you can redistribute it and/or modify       #
#   it under the terms of the GNU General Public License as                 #
#   published by the Free Software Foundation, either version 3 of the      #
#   License, or (at your option) any later version.                         #
#                                                                           #
#   GAppProxy is distributed in the hope that it will be useful,            #
#   but WITHOUT ANY WARRANTY; without even the implied warranty of          #
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the           #
#   GNU General Public License for more details.                            #
#                                                                           #
#   You should have received a copy of the GNU General Public License       #
#   along with GAppProxy.  If not, see <http://www.gnu.org/licenses/>.      #
#                                                                           #
#############################################################################

import BaseHTTPServer, SocketServer, urllib, urllib2, urlparse, zlib, socket, os, common, sys, errno, base64, re
try:
    import ssl
    ssl_enabled = True
except:
    ssl_enabled = False

# global variables
listen_port = common.DEF_LISTEN_PORT
local_proxy = common.DEF_LOCAL_PROXY
fetch_server = common.DEF_FETCH_SERVER
google_proxy = {}

class LocalProxyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    PostDataLimit = 0x100000

    def do_CONNECT(self):
        if not ssl_enabled:
            self.send_error(501, "Local proxy error, HTTPS needs Python2.6 or later.")
            self.connection.close()
            return

        # for ssl proxy
        (https_host, _, https_port) = self.path.partition(":")
        if https_port != "" and https_port != "443":
            self.send_error(501, "Local proxy error, Only port 443 is allowed for https.")
            self.connection.close()
            return

        # continue
        self.wfile.write("HTTP/1.1 200 OK\r\n")
        self.wfile.write("\r\n")
        ssl_sock = ssl.SSLSocket(self.connection, server_side=True, certfile=common.DEF_CERT_FILE, keyfile=common.DEF_KEY_FILE)

        # rewrite request line, url to abs
        first_line = ""
        while True:
            chr = ssl_sock.read(1)
            # EOF?
            if chr == "":
                # bad request
                ssl_sock.close()
                self.connection.close()
                return
            # newline(\r\n)?
            if chr == "\r":
                chr = ssl_sock.read(1)
                if chr == "\n":
                    # got
                    break
                else:
                    # bad request
                    ssl_sock.close()
                    self.connection.close()
                    return
            # newline(\n)?
            if chr == "\n":
                # got
                break
            first_line += chr
        # got path, rewrite
        (method, path, ver) = first_line.split()
        if path.startswith("/"):
            path = "https://%s" % https_host + path

        # connect to local proxy server
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect(("127.0.0.1", listen_port))
        sock.send("%s %s %s\r\n" % (method, path, ver))

        # forward https request
        ssl_sock.settimeout(1)
        while True:
            try:
                data = ssl_sock.read(8192)
            except ssl.SSLError, e:
                if str(e).lower().find("timed out") == -1:
                    # error
                    sock.close()
                    ssl_sock.close()
                    self.connection.close()
                    return
                # timeout
                break
            if data != "":
                sock.send(data)
            else:
                # EOF
                break
        ssl_sock.setblocking(True)

        # simply forward response
        while True:
            data = sock.recv(8192)
            if data != "":
                ssl_sock.write(data)
            else:
                # EOF
                break

        # clean
        sock.close()
        ssl_sock.shutdown(socket.SHUT_WR)
        ssl_sock.close()
        self.connection.close()
  
    def do_METHOD(self):
        # check http method and post data
        method = self.command
        if method == "GET" or method == "HEAD":
            # no post data
            post_data_len = 0
        elif method == "POST":
            # get length of post data
            post_data_len = 0
            for header in self.headers:
                if header.lower() == "content-length":
                    post_data_len = int(self.headers[header])
                    break
            # exceed limit?
            if post_data_len > self.PostDataLimit:
                self.send_error(413, "Local proxy error, Sorry, Google's limit, file size up to 1MB.")
                self.connection.close()
                return
        else:
            # unsupported method
            self.send_error(501, "Local proxy error, Method not allowed.")
            self.connection.close()
            return

        # get post data
        post_data = ""
        if post_data_len > 0:
            post_data = self.rfile.read(post_data_len)
            if len(post_data) != post_data_len:
                # bad request
                self.send_error(400, "Local proxy error, Post data length error.")
                self.connection.close()
                return

        # do path check
        (scm, netloc, path, params, query, _) = urlparse.urlparse(self.path)
        if (scm.lower() != "http" and scm.lower() != "https") or not netloc:
            self.send_error(501, "Local proxy error, Unsupported scheme(ftp for example).")
            self.connection.close()
            return
        # create new path
        path = urlparse.urlunparse((scm, netloc, path, params, query, ""))

        # remove disallowed header
        dhs = []
        for header in self.headers:
            hl = header.lower()
            if hl == "if-range":
                dhs.append(header)
            elif hl == "range":
                dhs.append(header)
        for dh in dhs:
            del self.headers[dh]
        # create request for GAppProxy
        params = urllib.urlencode({"method": method,
                                   "encoded_path": base64.b64encode(path),
                                   "headers": self.headers,
                                   "postdata": post_data,
                                   "version": common.VERSION})
        # accept-encoding: identity, *;q=0
        # connection: close
        request = urllib2.Request(fetch_server)
        request.add_header("Accept-Encoding", "identity, *;q=0")
        request.add_header("Connection", "close")
        # create new opener
        if local_proxy != "":
            proxy_handler = urllib2.ProxyHandler({"http": local_proxy})
        else:
            proxy_handler = urllib2.ProxyHandler(google_proxy)
        opener = urllib2.build_opener(proxy_handler)
        # set the opener as the default opener
        urllib2.install_opener(opener)
        try:
            resp = urllib2.urlopen(request, params)
        except urllib2.HTTPError, e:
            if e.code == 404:
                self.send_error(404, "Local proxy error, Fetchserver not found at the URL you specified, please check it.")
            elif e.code == 502:
                self.send_error(502, "Local proxy error, Transmission error, or the fetchserver is too busy.")
            else:
                self.send_error(e.code)
            self.connection.close()
            return

        # parse resp
        # for status line
        line = resp.readline()
        words = line.split()
        status = int(words[1])
        reason = " ".join(words[2:])

        # for large response
        if status == 592 and method == "GET":
            self.processLargeResponse(path)
            self.connection.close()
            return

        # normal response
        try:
            self.send_response(status, reason)
        except socket.error, (err, _):
            # Connection/Webpage closed before proxy return
            if err == errno.EPIPE or err == 10053: # *nix, Windows
                return
            else:
                raise

        # for headers
        text_content = True
        while True:
            line = resp.readline()
            line = line.strip()
            # end header?
            if line == "":
                break
            # header
            (name, _, value) = line.partition(":")
            name = name.strip()
            value = value.strip()
            # ignore Accept-Ranges
            if name.lower() == "accept-ranges":
                continue
            self.send_header(name, value)
            # check Content-Type
            if name.lower() == "content-type":
                if value.lower().find("text") == -1:
                    # not text
                    text_content = False
        self.send_header("Accept-Ranges", "none")
        self.end_headers()

        # for page
        if text_content:
            data = resp.read()
            if len(data) > 0:
                self.wfile.write(zlib.decompress(data))
        else:
            self.wfile.write(resp.read())
        self.connection.close()

    do_GET = do_METHOD
    do_HEAD = do_METHOD
    do_POST = do_METHOD

    def processLargeResponse(self, path):
        cur_pos = 0
        part_length = 0x100000 # 1m initial, at least 64k
        first_part = True
        content_length = 0
        text_content = True
        allowed_failed = 10

        while allowed_failed > 0:
            next_pos = 0
            self.headers["Range"] = "bytes=%d-%d" % (cur_pos, cur_pos + part_length - 1)
            # create request for GAppProxy
            params = urllib.urlencode({"method": "GET",
                                       "encoded_path": base64.b64encode(path),
                                       "headers": self.headers,
                                       "postdata": "",
                                       "version": common.VERSION})
            # accept-encoding: identity, *;q=0
            # connection: close
            request = urllib2.Request(fetch_server)
            request.add_header("Accept-Encoding", "identity, *;q=0")
            request.add_header("Connection", "close")
            # create new opener
            if local_proxy != "":
                proxy_handler = urllib2.ProxyHandler({"http": local_proxy})
            else:
                proxy_handler = urllib2.ProxyHandler(google_proxy)
            opener = urllib2.build_opener(proxy_handler)
            # set the opener as the default opener
            urllib2.install_opener(opener)
            resp = urllib2.urlopen(request, params)

            # parse resp
            # for status line
            line = resp.readline()
            words = line.split()
            status = int(words[1])
            # not range response?
            if status != 206:
                # reduce part_length and try again
                if part_length > 65536:
                    part_length /= 2
                allowed_failed -= 1
                continue

            # for headers
            if first_part:
                self.send_response(200, "OK")
                while True:
                    line = resp.readline().strip()
                    # end header?
                    if line == "":
                        break
                    # header
                    (name, _, value) = line.partition(":")
                    name = name.strip()
                    value = value.strip()
                    # get total length from Content-Range
                    nl = name.lower()
                    if nl == "content-range":
                        m = re.match(r"bytes[ \t]+([0-9]+)-([0-9]+)/([0-9]+)", value)
                        if not m or int(m.group(1)) != cur_pos:
                            # Content-Range error, fatal error
                            return
                        next_pos = int(m.group(2)) + 1
                        content_length = int(m.group(3))
                        continue
                    # ignore Content-Length
                    elif nl == "content-length":
                        continue
                    # ignore Accept-Ranges
                    elif nl == "accept-ranges":
                        continue
                    self.send_header(name, value)
                    # check Content-Type
                    if nl == "content-type":
                        if value.lower().find("text") == -1:
                            # not text
                            text_content = False
                if content_length == 0:
                    # no Content-Length, fatal error
                    return
                self.send_header("Content-Length", content_length)
                self.send_header("Accept-Ranges", "none")
                self.end_headers()
                first_part = False
            else:
                while True:
                    line = resp.readline().strip()
                    # end header?
                    if line == "":
                        break
                    # header
                    (name, _, value) = line.partition(":")
                    name = name.strip()
                    value = value.strip()
                    # get total length from Content-Range
                    if name.lower() == "content-range":
                        m = re.match(r"bytes[ \t]+([0-9]+)-([0-9]+)/([0-9]+)", value)
                        if not m or int(m.group(1)) != cur_pos:
                            # Content-Range error, fatal error
                            return
                        next_pos = int(m.group(2)) + 1
                        continue

            # for body
            if text_content:
                data = resp.read()
                if len(data) > 0:
                    self.wfile.write(zlib.decompress(data))
            else:
                self.wfile.write(resp.read())

            # next part?
            if next_pos == content_length:
                return
            cur_pos = next_pos

class ThreadingHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    pass

def shallWeNeedGoogleProxy():
    global google_proxy

    # send http request directly
    request = urllib2.Request(common.LOAD_BALANCE)
    try:
        # avoid wait too long at startup, timeout argument need py2.6 or later.
        if sys.hexversion >= 0x20600f0:
            resp = urllib2.urlopen(request, timeout=3)
        else:
            resp = urllib2.urlopen(request)
        resp.read()
    except:
        google_proxy = {"http": common.GOOGLE_PROXY}

def getAvailableFetchServer():
    request = urllib2.Request(common.LOAD_BALANCE)
    if local_proxy != "":
        proxy_handler = urllib2.ProxyHandler({"http": local_proxy})
    else:
        proxy_handler = urllib2.ProxyHandler(google_proxy)
    opener = urllib2.build_opener(proxy_handler)
    urllib2.install_opener(opener)
    try:
        resp = urllib2.urlopen(request)
        return resp.read().strip()
    except:
        return ""

def parseConf(confFile):
    global listen_port, local_proxy, fetch_server

    # read config file
    try:
        fp = open(confFile, "r")
    except IOError:
        # use default parameters
        return
    # parse user defined parameters
    while True:
        line = fp.readline()
        if line == "":
            # end
            break
        # parse line
        line = line.strip()
        if line == "":
            # empty line
            continue
        if line.startswith("#"):
            # comments
            continue
        (name, sep, value) = line.partition("=")
        if sep == "=":
            name = name.strip().lower()
            value = value.strip()
            if name == "listen_port":
                listen_port = int(value)
            elif name == "local_proxy":
                local_proxy = value
            elif name == "fetch_server":
                fetch_server = value
    fp.close()

if __name__ == "__main__":
    parseConf(common.DEF_CONF_FILE)

    if local_proxy == "":
        shallWeNeedGoogleProxy()

    if fetch_server == "":
        fetch_server = getAvailableFetchServer()
    if fetch_server == "":
        raise common.GAppProxyError("Invalid response from load balance server.")

    print "--------------------------------------------"
    print "HTTPS Enabled: %s" % (ssl_enabled and "YES" or "NO")
    print "Direct Fetch : %s" % (google_proxy and "NO" or "YES")
    print "Listen Addr  : 127.0.0.1:%d" % listen_port
    print "Local Proxy  : %s" % local_proxy
    print "Fetch Server : %s" % fetch_server
    print "--------------------------------------------"
    httpd = ThreadingHTTPServer(("127.0.0.1", listen_port), LocalProxyHandler)
    httpd.serve_forever()

fetch.py
Quote:
#! /usr/bin/env python
# coding=utf-8
#############################################################################
#                                                                           #
#   File: fetch.py                                                          #
#                                                                           #
#   Copyright (C) 2008-2009 Du XiaoGang <[email protected]>                    #
#                                                                           #
#   Home: http://gappproxy.googlecode.com                                   #
#                                                                           #
#   This file is part of GAppProxy.                                         #
#                                                                           #
#   GAppProxy is free software: you can redistribute it and/or modify       #
#   it under the terms of the GNU General Public License as                 #
#   published by the Free Software Foundation, either version 3 of the      #
#   License, or (at your option) any later version.                         #
#                                                                           #
#   GAppProxy is distributed in the hope that it will be useful,            #
#   but WITHOUT ANY WARRANTY; without even the implied warranty of          #
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the           #
#   GNU General Public License for more details.                            #
#                                                                           #
#   You should have received a copy of the GNU General Public License       #
#   along with GAppProxy.  If not, see <http://www.gnu.org/licenses/>.      #
#                                                                           #
#############################################################################

import wsgiref.handlers, urlparse, StringIO, logging, base64, zlib, re
from google.appengine.ext import webapp
from google.appengine.api import urlfetch
from google.appengine.api import urlfetch_errors

class MainHandler(webapp.RequestHandler):
    Software = "GAppProxy/1.2.0"
    # hop to hop header should not be forwarded
    H2H_Headers = ["connection", "keep-alive", "proxy-authenticate", "proxy-authorization", "te", "trailers", "transfer-encoding", "upgrade"]
    Forbid_Headers = ["if-range"]
    Fetch_Max = 3

    def sendErrorPage(self, status, description):
        self.response.headers["Content-Type"] = "application/octet-stream"
        # http over http
        # header
        self.response.out.write("HTTP/1.1 %d %s\r\n" % (status, description))
        self.response.out.write("Server: %s\r\n" % self.Software)
        self.response.out.write("Content-Type: text/html\r\n")
        self.response.out.write("\r\n")
        # body
        content = "<h1>Fetch Server Error</h1><p>Error Code: %d<p>Message: %s" % (status, description)
        self.response.out.write(zlib.compress(content))

    def post(self):
        try:
            # get post data
            orig_method = self.request.get("method").encode("utf-8")
            orig_path = base64.b64decode(self.request.get("encoded_path").encode("utf-8"))
            orig_headers = self.request.get("headers").encode("utf-8")
            orig_post_data = self.request.get("postdata").encode("utf-8")

            # check method
            if orig_method != "GET" and orig_method != "HEAD" and orig_method != "POST":
                # forbid
                self.sendErrorPage(590, "Invalid local proxy, Method not allowed.")
                return
            if orig_method == "GET":
                method = urlfetch.GET
            elif orig_method == "HEAD":
                method = urlfetch.HEAD
            elif orig_method == "POST":
                method = urlfetch.POST

            # check path
            (scm, netloc, path, params, query, _) = urlparse.urlparse(orig_path)
            if (scm.lower() != "http" and scm.lower() != "https") or not netloc:
                self.sendErrorPage(590, "Invalid local proxy, Unsupported Scheme.")
                return
            # create new path
            new_path = urlparse.urlunparse((scm, netloc, path, params, query, ""))

            # make new headers
            new_headers = {}
            content_length = 0
            si = StringIO.StringIO(orig_headers)
            while True:
                line = si.readline()
                line = line.strip()
                if line == "":
                    break
                # parse line
                (name, _, value) = line.partition(":")
                name = name.strip()
                value = value.strip()
                nl = name.lower()
                if nl in self.H2H_Headers or nl in self.Forbid_Headers:
                    # don't forward
                    continue
                new_headers[name] = value
                if name.lower() == "content-length":
                    content_length = int(value)
            # predefined header
            new_headers["Connection"] = "close"

            # check post data
            if content_length != 0:
                if content_length != len(orig_post_data):
                    logging.warning("Invalid local proxy, Wrong length of post data, %d!=%d." % (content_length, len(orig_post_data)))
                    #self.sendErrorPage(590, "Invalid local proxy, Wrong length of post data, %d!=%d." % (content_length, len(orig_post_data)))
                    #return
            else:
                orig_post_data = ""
            if orig_post_data != "" and orig_method != "POST":
                self.sendErrorPage(590, "Invalid local proxy, Inconsistent method and data.")
                return
        except Exception, e:
            self.sendErrorPage(591, "Fetch server error, %s." % str(e))
            return

        # fetch, try * times
        range_request = False
        for i in range(self.Fetch_Max):
            try:
                # the last time, try with Range
                if i == self.Fetch_Max - 1 and method == urlfetch.GET and not new_headers.has_key("Range"):
                    range_request = True
                    new_headers["Range"] = "bytes=0-65535"
                # fetch
                resp = urlfetch.fetch(new_path, orig_post_data, method, new_headers, False, False)
                # ok, got
                if range_request:
                    range_supported = False
                    for h in resp.headers:
                        if h.lower() == "accept-ranges":
                            if resp.headers[h].strip().lower() == "bytes":
                                range_supported = True
                                break
                        elif h.lower() == "content-range":
                            range_supported = True
                            break
                    if range_supported:
                        self.sendErrorPage(592, "Fetch server error, Retry with range header.")
                    else:
                        self.sendErrorPage(591, "Fetch server error, Sorry, file size up to Google's limit and the target server doesn't accept Range request.")
                    return
                break
            except Exception, e:
                logging.warning("urlfetch.fetch(%s) error: %s." % (range_request and "Range" or "", str(e)))
        else:
            self.sendErrorPage(591, "Fetch server error, The target server may be down or not exist. Another possibility: try to request the URL directly.")
            return

        # forward
        self.response.headers["Content-Type"] = "application/octet-stream"
        # status line
        self.response.out.write("HTTP/1.1 %d %s\r\n" % (resp.status_code, self.response.http_status_message(resp.status_code)))
        # headers
        # default Content-Type is text
        text_content = True
        for header in resp.headers:
            if header.strip().lower() in self.H2H_Headers:
                # don't forward
                continue
            # there may have some problems on multi-cookie process in urlfetch.
            # Set-Cookie: "wordpress=lovelywcm%7C1248344625%7C26c45bab991dcd0b1f3bce6ae6c78c92; expires=Thu, 23-Jul-2009 10:23:45 GMT; path=/wp-content/plugins; domain=.wordpress.com; httponly, wordpress=lovelywcm%7C1248344625%7C26c45bab991dcd0b1f3bce6ae6c78c92; expires=Thu, 23-Jul-2009 10:23:45 GMT; path=/wp-content/plugins; domain=.wordpress.com; httponly,wordpress=lovelywcm%7C1248344625%7C26c45bab991dcd0b1f3bce6ae6c78c92; expires=Thu, 23-Jul-2009 10:23:45 GMT; path=/wp-content/plugins; domain=.wordpress.com; httponly
            if header.lower() == "set-cookie":
                scs = resp.headers[header].split(",")
                nsc = ""
                for sc in scs:
                    if nsc == "":
                        nsc = sc
                    elif re.match(r"[ \t]*[0-9]", sc):
                        # expires 2nd part
                        nsc += "," + sc
                    else:
                        # new one
                        self.response.out.write("%s: %s\r\n" % (header, nsc.strip()))
                        nsc = sc
                self.response.out.write("%s: %s\r\n" % (header, nsc.strip()))
                continue
            # other
            self.response.out.write("%s: %s\r\n" % (header, resp.headers[header]))
            # check Content-Type
            if header.lower() == "content-type":
                if resp.headers[header].lower().find("text") == -1:
                    # not text
                    text_content = False
        self.response.out.write("\r\n")
        # only compress when Content-Type is text/xxx
        if text_content:
            self.response.out.write(zlib.compress(resp.content))
        else:
            self.response.out.write(resp.content)

    def get(self):
        self.response.headers["Content-Type"] = "text/html; charset=utf-8"
        self.response.out.write( \
"""
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>GAppProxy已经在工作了</title>
    </head>
    <body>
        <table width="800" border="0" align="center">
            <tr><td align="center"><hr></td></tr>
            <tr><td align="center">
                <b><h1>%s 已经在工作了</h1></b>
            </td></tr>
            <tr><td align="center"><hr></td></tr>
            <tr><td align="center">
                GAppProxy是一个开源的HTTP Proxy软件,使用Python编写,运行于Google App Engine平台上
            </td></tr>
            <tr><td align="center"><hr></td></tr>
            <tr><td align="center">
                更多相关介绍,请参考<a href="http://gappproxy.googlecode.com/">GAppProxy项目主页</a>.
            </td></tr>
            <tr><td align="center"><hr></td></tr>
            <tr><td align="center">
                <img src="http://code.google.com/appengine/images/appengine-silver-120x30.gif" alt="Powered by Google App Engine" />
            </td></tr>
            <tr><td align="center"><hr></td></tr>
        </table>
    </body>
</html>
""" % self.Software)def main():
    application = webapp.WSGIApplication([("/fetch.py", MainHandler)])
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
    main()



common.py
Quote:

#! /usr/bin/env python
# coding=utf-8
#############################################################################
#                                                                           #
#   File: common.py                                                         #
#                                                                           #
#   Copyright (C) 2008-2009 Du XiaoGang <[email protected]>                    #
#                                                                           #
#   Home: http://gappproxy.googlecode.com                                   #
#                                                                           #
#   This file is part of GAppProxy.                                         #
#                                                                           #
#   GAppProxy is free software: you can redistribute it and/or modify       #
#   it under the terms of the GNU General Public License as                 #
#   published by the Free Software Foundation, either version 3 of the      #
#   License, or (at your option) any later version.                         #
#                                                                           #
#   GAppProxy is distributed in the hope that it will be useful,            #
#   but WITHOUT ANY WARRANTY; without even the implied warranty of          #
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the           #
#   GNU General Public License for more details.                            #
#                                                                           #
#   You should have received a copy of the GNU General Public License       #
#   along with GAppProxy.  If not, see <http://www.gnu.org/licenses/>.      #
#                                                                           #
#############################################################################

import os, sys
def we_are_frozen():
    """Returns whether we are frozen via py2exe.
    This will affect how we find out where we are located."""

    return hasattr(sys, "frozen")
def module_path():
    """ This will get us the program's directory,
    even if we are frozen using py2exe"""

    if we_are_frozen():
        return os.path.dirname(sys.executable)
    return os.path.dirname(__file__)

dir = module_path()
VERSION = "1.2.0"
LOAD_BALANCE = 'http://gappproxy-center.appspot.com/available_fetchserver.py'
GOOGLE_PROXY = 'www.google.cn:80'

DEF_LISTEN_PORT = 8000
DEF_LOCAL_PROXY = ''
DEF_FETCH_SERVER = ''
DEF_CONF_FILE = os.path.join(dir, 'proxy.conf')
DEF_CERT_FILE = os.path.join(dir, 'ssl/LocalProxyServer.cert')
DEF_KEY_FILE  = os.path.join(dir, 'ssl/LocalProxyServer.key')

class GAppProxyError(Exception):
    def __init__(self, reason):
        self.reason = reason

    def __str__(self):
        return '<GAppProxy Error: %s>' % self.reason



[ This post was last edited by 花香 on 2010-07-30 17:54 ]
Attachment: fetchServer.rar (4 K), downloaded 615 times
Posted 2010-07-30 05:36 | OP
    xsk120


     

Thanks to the OP for sharing. I wonder if anyone can solve the avatar upload problem; waiting for an expert...
Posted 2010-07-30 07:43 | Floor 1
    histein


     

花香, you went to a lot of trouble writing this whole long article just to climb the wall.
Passing by from Hong Kong, where no wall-climbing is needed.

Posted 2010-07-30 08:34 | Floor 2
    萌太君

     

Come to think of it, domestic SNS sites all have Twitter-like features built in now; if you go out of your way to climb the wall you may hit a language barrier~
Posted 2010-07-30 09:14 | Floor 3
    好熊


     

Quote:
Quoting Floor 3, 萌太君, posted 2010-07-30 09:14:
Come to think of it, domestic SNS sites all have Twitter-like features built in now; if you go out of your way to climb the wall you may hit a language barrier~

The active Chinese-language crowd there is currently about 20,000 people.
Posted 2010-07-30 09:21 | Floor 4
    ceuros

     

If all you want is to get over the wall, a VPN is still more convenient, but this is quite useful for getting off the education network...

Posted 2010-07-30 10:13 | Floor 5
    KON

     

Many websites still cannot be reached, and I am not sure why.

Also, MF and the G site now seem reachable directly without any wall-climbing.
Everything else remains to be tested.


Posted 2010-07-30 10:56 | Floor 6
    gank

     

I followed the OP's tutorial but cannot reach any site at all. Could it be because I am behind a router?
Posted 2010-07-30 12:54 | Floor 7
    董子


     

This... posting something like this on CK is probably not a great idea... careful the whole forum gets walled; that has happened before...
I used GAppProxy for a while when it first came out,
but some hard limits of Google App Engine simply cannot be worked around: the maximum size of a single file through the proxy is 1MB, pages with JavaScript-driven redirects sometimes fail in inexplicable ways, and the https certificate errors never went away no matter how I updated the files...
Sometimes the IP also gets treated as malicious by certain sites because too many people share it.
Also, because of the file-size limit YouTube does not work, although Flash itself is supported fine.
Honestly, the most useful thing about it is the Google IP... go post something controversial anonymously somewhere, the admin looks up the IP and thinks, whoa, this person is at Google, better not mess with them... perfect!
The IP usually geolocates to Google in Mountain View, California...
Also, your tutorial is not beginner-friendly at all... it is far more complicated than most of the write-ups in the Google groups...
A simpler GUI version has been out for ages...
If you find all of this too much trouble, just download the package I am providing, unpack and install it, run gui.exe inside, tick "use fetch server", and enter mine: http://dzh124m.appspot.com/fetch.py
Then set the proxy in Firefox: menu - Tools - Options - Advanced - Network - Connection - Settings - Manual proxy configuration - enter 127.0.0.1 for HTTP proxy and 8000 for port - tick "Use this proxy server for all protocols" - OK (copied from the post above),
and you are done.
Password: O*O*X*X*G*F*W, remove the asterisks yourself...
http://u.115.com/file/t8fbe3ca61
Posted 2010-07-30 15:06 | Floor 8
    董子


     

By the way, after running gui.exe, click Server (on Vista and 7 you need to re-run gui.exe as administrator), then Yes; after that it runs automatically in the background, you can close the window, and it will also start automatically on every boot.
Posted 2010-07-30 15:10 | Floor 9
    恒の风

     

I just came in to nitpick: use code tags for quoting code, not quote tags.
Posted 2010-07-30 15:11 | Floor 10
    暁美ほむら


     

Quote:
Quoting Floor 11, 恒の风, posted 2010-07-30 15:11:
I just came in to nitpick: use code tags for quoting code, not quote tags.

No idea why, but when I use code tags the code does not show up at all!
Posted 2010-07-30 15:37 | Floor 11
    暁美ほむら


     

Quote:
(the reply from 董子 at Floor 8 above, quoted in full)

Well, Google is exactly why I went with this. With my version YouTube plays fine; the GUI program apparently still cannot log in to Twitter or play YouTube.
[ This post was last edited by 花香 on 2010-07-30 15:51 ]
Posted 2010-07-30 15:40 | Floor 12
    董子


     

Quote:
Quoting Floor 13, 花香, posted 2010-07-30 15:40:
Well, Google is exactly why I went with this. With my version YouTube plays fine; the GUI program apparently still cannot log in to Twitter or play YouTube.



Then maybe I just have bad luck...
Twitter works fine for me, though...
OK, I have been too lazy to update the server-side program...
Posted 2010-07-30 15:56 | Floor 13
    酷酷蛋蛋


     

Doesn't GAPP have a client build that supports twxttxr? Well, I don't actually use it, so I wouldn't know.
Posted 2010-07-30 16:12 | Floor 14
    fexis


     

Thanks to the OP for sharing.
I used GAppProxy for a while too. It was decent, but sometimes GAppProxy itself became unreachable, and then you were out of luck.
I did not want to use the tools made by those XXX people either, but I could not resist the temptation and spent 50 yuan on a year of SSH.
Now, with Chrome + Proxy Switchy!, YouTube videos below 720p basically play without buffering.
Also, if all you want is Twitter, setting up a Twitter API proxy is another good option.
Posted 2010-07-30 16:15 | Floor 15
    mixyzai

     

"Download and install Python and the Google App Engine SDK"

Are these two the files appengine-java-sdk-1.3.5.zip (26.9M) and GoogleAppEngine_1.3.5.msi (9.3M)?
Posted 2010-07-30 16:44 | Floor 16
    暁美ほむら


     

Quote:
Quoting Floor 17, mixyzai, posted 2010-07-30 16:44:
"Download and install Python and the Google App Engine SDK"

Are these two the files appengine-java-sdk-1.3.5.zip (26.9M) and GoogleAppEngine_1.3.5.msi (9.3M)?

There is a download link at the beginning of my article. Oh, I forgot to include Python, but Python can be downloaded just about anywhere.
[ This post was last edited by 花香 on 2010-07-30 17:16 ]
Posted 2010-07-30 17:02 | Floor 17
    mixyzai

     

I opened the app.yaml file in the freshly extracted fetchServer folder with Notepad:

    application: your_application_name
    version: 1
    runtime: python
    api_version: 1

    handlers:

    - url: /fetch.py
      script: fetch.py

This is the one downloaded from the OP's tora link. I changed your_application_name to the ID I registered, but the upload fails.

The Python I installed is Python 3.1, downloaded from skycn.

And I am on Windows XP.

[ This post was last edited by mixyzai on 2010-07-30 17:27 ]
Posted 2010-07-30 17:22 | Floor 18
    恒の风

     

Quote:
Quoting Floor 19, mixyzai, posted 2010-07-30 17:22:
I opened the app.yaml file in the freshly extracted fetchServer folder with Notepad:

application: your_application_name
version: 1
runtime: python
.......

Switch to Python 2.6:
    http://hi.baidu.com/konglingxuan/blog/item/3aee9095bc8a96037af48082.html
Posted 2010-07-30 17:30 | Floor 19