Pyjamas event handling

I am having a difficult time understanding the Pyjamas/GWT event handling system. I am currently using the latest Pyjamas, 0.8, for testing. I am not sure what the best event handling structure would be, as I have never done GUI programming.
I haven't had much luck with the documentation I've found thus far. Does anyone know of a good reference for Pyjamas or GWT?
My main difficulty is understanding where listeners such as onClick, onMouseLeave, etc. come from. How are they triggered? Where are they defined? Do I define them?
What is the layered structure for the event handling system?
I know these are very general questions, but I'm really just looking for a pointer in the right direction.
Thank you and I appreciate any help given.

I would suggest you study the source in the examples folder. Start with this: http://pyjs.org/book/output/Bookreader.html#Getting%20Started
Here are some links which have been helpful for me:
http://gwt.google.com/samples/Showcase/Showcase.html
http://pyjs.org/examples/
Also, in the examples folder there is a great example called Showcase which covers the whole API with some helpful code samples:
/localhost/somedir/showcase/output/Showcase.html
Since the API is similar, you can always check these GWT books (especially helpful for understanding callbacks, etc.):
http://www.amazon.com/Beginning-Google-Web-Toolkit-Professional/dp/1430210311/ref=sr_1_12?ie=UTF8&qid=1334659695&sr=8-12
http://www.amazon.com/Google-Toolkit-Applications-Ryan-Dewsbury/dp/0321501969/ref=sr_1_7?ie=UTF8&qid=1334659695&sr=8-7
For Django and Pyjamas:
http://www.derekschaefer.net/2011/02/08/pyjamas-django-pure-win/
However, I agree there is a great need for better introductory tutorials beyond the hello world example. I'm struggling with it myself. Good luck.
P.S. I've created a small callback example that seems to work. I would be grateful if people corrected me here and edited this example to be of more use. All I'm trying to do is have navigation with 2 pages (represented by 2 classes: Intro and Outro).
import pyjd

from pyjamas.ui.VerticalPanel import VerticalPanel
from pyjamas.ui.RootPanel import RootPanel
from pyjamas.ui.SimplePanel import SimplePanel
from pyjamas.ui.DockPanel import DockPanel
from pyjamas.ui.Hyperlink import Hyperlink
from pyjamas.ui.Button import Button
from pyjamas.ui.HTML import HTML
from pyjamas import Window


class Site(SimplePanel):
    def onModuleLoad(self):
        SimplePanel.__init__(self)
        self.panel = DockPanel()
        self.intro = Intro()
        self.outro = Outro()
        self.index = HTML('index')
        self.curPage = self.index

        vp = VerticalPanel()
        vp.add(self.index)   # (this VerticalPanel is created but never attached anywhere)

        self.link1 = Hyperlink('menu item 1')
        self.link2 = Hyperlink('menu item 2')
        # a bound method can be passed directly as a click listener;
        # it is called with the widget that fired the event
        self.link1.addClickListener(self.onLINK1)
        self.link2.addClickListener(self.onLINK2)

        self.panel.add(self.link1, DockPanel.WEST)
        self.panel.add(self.link2, DockPanel.WEST)
        self.panel.add(self.index, DockPanel.CENTER)
        RootPanel().add(self.panel)

    def onLINK1(self, sender=None):
        # swap whatever is in the CENTER slot for the intro page
        self.panel.remove(self.curPage)
        self.panel.add(self.intro, DockPanel.CENTER)
        self.curPage = self.intro

    def onLINK2(self, sender=None):
        self.panel.remove(self.curPage)
        self.panel.add(self.outro, DockPanel.CENTER)
        self.curPage = self.outro


class Intro(SimplePanel):
    def __init__(self):
        SimplePanel.__init__(self)
        self.vp = VerticalPanel()
        self.html = HTML('This is intro')
        # passing `self` as the listener: the button calls this
        # object's onClick() method when clicked
        self.button = Button('click me', self)
        self.vp.add(self.html)
        self.vp.add(self.button)
        self.setWidget(self.vp)

    def onClick(self, sender=None):
        Window.alert('onClick Intro')


class Outro(SimplePanel):
    def __init__(self):
        SimplePanel.__init__(self)
        self.vp = VerticalPanel()
        self.html = HTML('This is outro')
        # we can pass the handler to the constructor
        self.button1 = Button('click me1', self.onBUTTON1)
        self.button2 = Button('click me2')
        # or set up the listener afterwards
        self.button2.addClickListener(self.onBUTTON2)
        self.vp.add(self.html)
        self.vp.add(self.button1)
        self.vp.add(self.button2)
        self.setWidget(self.vp)

    def onBUTTON1(self, sender=None):
        Window.alert('hello from button1')

    def onBUTTON2(self, sender=None):
        Window.alert('hello from button2')


if __name__ == '__main__':
    pyjd.setup('./Site.html')
    app = Site()
    app.onModuleLoad()
    pyjd.run()

Related

Python Scraper for Javascript?

Can anyone direct me to a good Python screen scraping library that can handle JavaScript (hopefully one with good documentation/tutorials)? I'd like to see what options are out there, but most of all the easiest to learn with the fastest results... wondering if anyone has experience. I've heard some things about SpiderMonkey, but maybe there are better ones out there?
Specifically, I use BeautifulSoup and Mechanize to get here, but I need a way to open the JavaScript popup, submit data, and download/parse the results in the popup.
Find Item
I'd like to implement this with Google App engine and Django. Thanks!
What I usually do is automate an actual browser in these cases, and grab the processed HTML from there.
Edit:
Here's an example of automating Internet Explorer to navigate to a URL and grab the title and location after the page loads.
from win32com.client import Dispatch
from ctypes import Structure, pointer, windll
from ctypes import c_int, c_long, c_uint
import win32con
import pywintypes


class POINT(Structure):
    _fields_ = [('x', c_long),
                ('y', c_long)]

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y


class MSG(Structure):
    _fields_ = [('hwnd', c_int),
                ('message', c_uint),
                ('wParam', c_int),
                ('lParam', c_int),
                ('time', c_int),
                ('pt', POINT)]


def wait_until_ready(ie):
    pMsg = pointer(MSG())
    NULL = c_int(win32con.NULL)
    while True:
        while windll.user32.PeekMessageW(pMsg, NULL, 0, 0, win32con.PM_REMOVE) != 0:
            windll.user32.TranslateMessage(pMsg)
            windll.user32.DispatchMessageW(pMsg)
        if ie.ReadyState == 4:
            break


ie = Dispatch("InternetExplorer.Application")
ie.Visible = True
ie.Navigate("http://google.com/")
wait_until_ready(ie)
print "title:", ie.Document.Title
print "location:", ie.Document.location
I use the Python bindings to webkit to render basic JavaScript and Chickenfoot for more advanced interactions. See this webkit example for more info.
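For reference, a minimal sketch of that webkit approach using the PyQt4 bindings (the URL is a placeholder, and this assumes PyQt4 with QtWebKit installed): load the page in a QWebPage, wait for loadFinished, then read the rendered HTML.
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtCore import QUrl
from PyQt4.QtWebKit import QWebPage

class JSRenderer(QWebPage):
    # Loads a URL, lets WebKit execute the page's JavaScript, then keeps the HTML.
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._done)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()   # blocks until _done() quits the event loop

    def _done(self, ok):
        self.html = self.mainFrame().toHtml()
        self.app.quit()

renderer = JSRenderer("http://example.com/")   # placeholder URL
print renderer.html[:200]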
You can also use a "programmatic web browser" named Spynner. I found this to be the best solution. Relatively easy to use.
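If you go the Spynner route, the basic flow looks roughly like this (a sketch from memory of Spynner's API, with a placeholder URL):
import spynner

browser = spynner.Browser()
browser.load("http://example.com/")   # placeholder URL; loads the page and runs its JavaScript
html = browser.html                   # rendered HTML after JavaScript execution
browser.close()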

Print Page content with Python Qt qWebEngine

I wrote a web application which, if opened in a browser, prints different divs depending on which div is needed.
The application backend is written in Python.
All fine, it works; the problem arose when the app had to run under a PyQt environment.
from PySide2.QtWebEngineWidgets import QWebEngineView
from PySide2.QtWidgets import QApplication
The app runs this way:
if __name__ == '__main__':
    logging.debug("Starting server")
    t = threading.Thread(target=index.run_server)
    t.daemon = True
    t.start()
    time.sleep(2)
    logging.debug("Server started")
    app = QApplication()
    view = QWebEngineView()
    view.load("http://127.0.0.1:8080/lblUnit")
    view.show()
    sys.exit(app.exec_())
The problem is: when the "Print" button is hit, no print event is triggered.
A little searching and I found that the Qt documentation says the window.print() method is not handled.
My question: is there any way to catch the event or make it work? I saw some code doing it in C, but honestly it is a mess.
I'm looking for a lightweight solution, if one exists of course.
Any advice is welcome :)
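Not a full answer, but one possible direction, assuming Qt/PySide2 5.12 or newer (where QWebEnginePage emits a printRequested signal when the page's JavaScript calls window.print()): catch that signal on the Python side and do the printing there, for example by rendering to PDF.
import sys
from PySide2.QtWidgets import QApplication
from PySide2.QtWebEngineWidgets import QWebEngineView

app = QApplication(sys.argv)
view = QWebEngineView()

def handle_print_requested():
    # Render the current page to a PDF instead of sending it to a printer.
    view.page().printToPdf("page.pdf")   # arbitrary output path

# printRequested fires when the loaded page calls window.print() (Qt 5.12+)
view.page().printRequested.connect(handle_print_requested)
view.load("http://127.0.0.1:8080/lblUnit")
view.show()
sys.exit(app.exec_())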

image classify and tensorflow serving

First of all, sorry if I am not precise in this question, but I am studying TensorFlow Serving and how to put my CNN into production. Honestly, the documentation is quite confusing to me. I hope you can help me understand the saved model architecture better. Please reply to me as a teacher would; I would like to know more about the whole flow.
I am developing a simple CNN to classify an image into 4 outputs.
I need TensorFlow Serving to put it into production.
The input image can be whatever size; the CNN should first resize it and then predict.
Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot as plt
from scipy.misc import toimage
from keras.models import Sequential
from keras.layers import *
from keras.optimizers import *
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl
import cv2
#train_path='Garage/train'
#train_datagen = ImageDataGenerator(rescale=1./255)
#train_batch = train_datagen.flow_from_directory(train_path, target_size=(64,64), class_mode='categorical', batch_size=10, color_mode='grayscale')
#validation_datagen = ImageDataGenerator(rescale=1./255)
#validation_batch = validation_datagen.flow_from_directory(
# './Garage/validation',
# target_size=(64, 64),
# batch_size=3,
# class_mode='categorical', color_mode='grayscale')
model = Sequential()
model.add(InputLayer(input_shape=[64,64,1]))
model.add(Conv2D(filters=32,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Conv2D(filters=50,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Conv2D(filters=80,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512,activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(4,activation='softmax'))
optimizer=Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
#model.fit_generator(
# train_batch,
# epochs=50,
# steps_per_epoch=6,
# validation_data=validation_batch,
# validation_steps=5)
model.load_weights('model.h5')
#score = model.evaluate_generator(validation_batch,steps=3)
#print('Test loss:', score[0])
#print('Test accuracy:', score[1])
#model.save('model.h5')
from PIL import Image
import requests
from io import BytesIO
response = requests.get('http://192.168.3.21:7451/shot.jpg')
image_pil = Image.open(BytesIO(response.content))
image = np.asarray(image_pil)
img2 = cv2.resize(image,(64,64))
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
img = np.reshape(img2,[1,64,64,1])
classes = model.predict_classes(img)
print(classes)
model_version="1"
sess = tf.Session()
#setting values for the sake of saving the model in the proper format
x = model.input
y = model.output
prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs":x}, {"prediction":y})
valid_prediction_signature = tf.saved_model.signature_def_utils.is_valid_signature(prediction_signature)
if valid_prediction_signature == False:
    raise ValueError("Error: Prediction signature not valid!")

builder = saved_model_builder.SavedModelBuilder('./'+model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    },
    legacy_init_op=legacy_init_op)
# save the graph
builder.save()
The code will take the picture from a cam at http://192.168.3.21:7451/shot.jpg and then it will predict it.
When I run the code, it returns a lot of errors when it tries to save the model. Can you please check it and tell me if the save-model instructions are right?
I use x = model.input as the input for serving, but I would like it to take the picture as input from the server.
I am quite confused actually, sorry.
The goal is that when I request a prediction for the image over gRPC, the model gives me back the prediction result.
Thanks
I tried to comment because I don't have a sure answer for you, but I didn't have enough space. Hopefully this info is helpful and can pass for an "answer".
Anyway, it's tough to say what the problems are, for a TensorFlow newb like me, without seeing the actual errors.
One thing I noticed is that the call to predict_signature_def() doesn't seem to follow the method signature I found here.
Also, I don't think you want to do your image downloading/processing in the same code you have your model in. TF Serving isn't supposed to run pre/post processing; it just hosts your models.
So what you can do is create something like a RESTful service that you accept the image at, then you run your preprocessing on it and send that processed image in as part of the request to TFServe. It looks something like this:
user+image
-> requests classification to RESTful service
-> REST API receives image
-> REST service resizes image
-> REST service makes classification request to TFS (where your model is)
-> TFS receives request, including resized/preprocessed image
-> TFS invokes classification using your model and the arguments you sent
-> TFS responds to REST service with model's response
-> REST service responds to user with the classification from your model
This sort of sucks because passing images around the network is inefficient and can be slow but you can optimize when you find pain points.
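As a concrete illustration of that flow, here is a rough sketch of the REST layer. Everything specific in it is an assumption rather than something from your setup: Flask for the API, TF Serving's HTTP/REST endpoint on localhost:8501, and a model exported under the name "garage".
import numpy as np
import requests
from io import BytesIO
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)
# Placeholder: TF Serving REST endpoint and model name are assumptions.
TFSERVING_URL = "http://localhost:8501/v1/models/garage:predict"

@app.route("/classify", methods=["POST"])
def classify():
    # 1. Receive the raw image from the user.
    img = Image.open(BytesIO(request.files["image"].read()))
    # 2. Preprocess it the way the model expects (grayscale, 64x64 here;
    #    match whatever preprocessing was used during training).
    img = img.convert("L").resize((64, 64))
    batch = np.asarray(img, dtype=np.float32).reshape(1, 64, 64, 1)
    # 3. Forward the preprocessed tensor to TF Serving.
    resp = requests.post(TFSERVING_URL, json={"instances": batch.tolist()})
    # 4. Relay TF Serving's prediction back to the caller.
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run(port=5000)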
The main idea is that your models should be saved off into an artifact/binary that can run and doesn't need your code to run. This lets you separate out modeling from data pre and post processing and gives your models a more consistent place to run from; e.g. you don't need to worry about competing versions of dependencies for your model to run.
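For the "artifact" part, a common export pattern with Keras on TF 1.x looks roughly like this; note the use of the Keras backend session rather than a fresh tf.Session(), which is only a guess at what may be going wrong in your save code, since the errors weren't posted:
import tensorflow as tf
from keras import backend as K
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants

# Reuse the session Keras has been using, so the saved variables are the
# ones that were actually initialized/loaded.
sess = K.get_session()
x, y = model.input, model.output   # `model` is the compiled Keras model from the question

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def(
    {"inputs": x}, {"prediction": y})

builder = saved_model_builder.SavedModelBuilder('./1')   # "1" is the model version directory
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    })
builder.save()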
The downside is it can be a bit of a learning curve to get these pieces to fit nicely after breaking them out from the monolithic architecture it seems like you have.
So, until a real Tensorflow knower comes along with a real answer, I hope this helps a bit.

Scraping Meteor with Python

I want to scrape some data from the site easydrop.ru. I don't have much experience, so I ran into some problems.
Step 1: First, I tried to load this page using requests.get from the requests lib and parse it with lxml. That failed because the content is mostly JS-generated, so I only got a small part of it, mostly meta information about the page.
Step 2: My second attempt was using a PyQt application to render the JS:
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
from lxml import html
# Take this class for granted. Just use the result of rendering.
class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()


def getFinalHtml(url):
    r = Render(url)
    result = r.frame.toHtml()
    final = html.fragment_fromstring(result, 'root')
    return final
That was better, but still unsuccessful. The page itself was fine now; however, it did not contain any data (number of current users, items, etc.).
This site uses the Meteor framework, and the values are obtained by communicating with the server without reloading the page.
Step 3: At the moment I'm at a dead end and cannot imagine how I can get the content of the page for further parsing :(
I have only 2 ideas for how to get data from these pages:
1) Load the pages in Python (an extension of step 2) and parse them using lxml (unfortunately, I don't know how to improve the loader to get the values from Meteor)
2) Use Meteor functions in Python to load the data from this site (users\items) without HTML
I have some ideas about the second solution.
I'm trying to do something with the https://github.com/hharnisc/python-meteor lib, but I got into a dead end even when trying to start it.
I have this line from a .js script on the site:
Meteor.startup(function (e) {
    return function () {
        return t(), Meteor.setInterval(t, 1e3), e.socket = io("https://ws.easydrop.ru", {transports: ["websocket"]})
    }
So, I have tried some variants
client = MeteorClient('http://ws.easydrop.ru:3000/websocket')
client = MeteorClient('http://ws.easydrop.ru:3000/')
client = MeteorClient('http://ws.easydrop.ru:443/')
client = MeteorClient('http://ws.easydrop.ru:443/websocket/')
But I get an error in Python:
raise ValueError("Invalid scheme: %s" % scheme)
ValueError: Invalid scheme: https
What should I do to make it work?
Selenium or PhantomJS work quite well for this; there is a handy guide here:
http://toddhayton.com/2015/02/03/scraping-with-python-selenium-and-phantomjs/
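The core of the Selenium/PhantomJS approach boils down to something like this (a sketch; it assumes PhantomJS is installed and on your PATH, and uses the site from the question):
import time
from selenium import webdriver

driver = webdriver.PhantomJS()        # headless WebKit browser
driver.get("https://easydrop.ru/")    # PhantomJS executes the page's JavaScript
time.sleep(5)                         # crude wait for the Meteor app to render; WebDriverWait is nicer
html = driver.page_source             # the DOM after rendering, ready for lxml/BeautifulSoup
driver.quit()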

QWebKit - Run page's javascript function?

Basically I want to go into my router's settings, set a checkbox, and then call a function on the page, all programmatically.
I know all the JavaScript to do this, and can do it via the Google Chrome console.
I can syntactically perform it through QWebKit, but the actual page remains unaffected.
import sys
from PyQt4 import QtCore, QtGui, QtWebKit


class Browser(QtWebKit.QWebView):
    def __init__(self, parent=None):
        super(Browser, self).__init__()
        self.changepage()

    def changepage(self):
        self.load(QtCore.QUrl("http://192.168.0.1/adv_mac_filter.php"))
        x1 = self.page().mainFrame()
        x1.evaluateJavaScript("entry_enable_1.checked = true;")
        x1.evaluateJavaScript("check();")


app = QtGui.QApplication(sys.argv)
x = Browser()
x.show()
sys.exit(app.exec_())
(Yes, this code requires that I am already logged into my router's settings.)
I know the JavaScript is being run, as I can test it using alerts.
However, the router's HTML check() function isn't run and the checkbox isn't changed.
It's like I'm not actually interacting with the page, but with a copy.
Am I making a massive rookie mistake here?
Specs:
python 2.7
PyQt4
QWebKit
Windows 7
You should probably wait for the page to finish loading, by moving the javascript evaluation to a slot connected to the loadFinished() signal.
But as Pointy suggested, you could also try to send POST requests yourself, through QNetworkAccessManager, instead of using a widget.
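A minimal sketch of the first suggestion, reworking the Browser class from the question so the JavaScript only runs from a slot connected to loadFinished:
import sys
from PyQt4 import QtCore, QtGui, QtWebKit

class Browser(QtWebKit.QWebView):
    def __init__(self, parent=None):
        super(Browser, self).__init__()
        # run the JavaScript only once the page has finished loading
        self.loadFinished.connect(self.on_load_finished)
        self.load(QtCore.QUrl("http://192.168.0.1/adv_mac_filter.php"))

    def on_load_finished(self, ok):
        frame = self.page().mainFrame()
        frame.evaluateJavaScript("entry_enable_1.checked = true;")
        frame.evaluateJavaScript("check();")

app = QtGui.QApplication(sys.argv)
x = Browser()
x.show()
sys.exit(app.exec_())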
