Managing Wicket Serialization Problem On Google App Engine

There are several problems with using Wicket on Google App Engine. While porting a Wicket application to Google App Engine, we faced several issues. This post looks at the problems you may encounter while working on a Wicket project on Google App Engine and how you may overcome them.

Google App Engine's "Will it play" list suggests that if you have a Wicket project and follow its workaround, your Wicket project will start working on App Engine. Still, one class of errors you may encounter while working on App Engine is Wicket serialization errors.

Wicket is very powerful at maintaining application/session state. This enables, for example, nice usage of the back/forward buttons. In a typical Wicket scenario, this is backed by the DiskPageStore implementation. But on Google App Engine we can't use DiskPageStore, as it relies on writing to the filesystem. Therefore, HttpSessionStore is used in the Wicket Application class instead. With this implementation, Wicket components and their associated models are saved in the session and therefore in the datastore.
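
For reference, that wiring looks roughly like this (a minimal sketch using the Wicket 1.4-era API that the listings below also use):

[sourcecode language="java"]
// A minimal sketch of the HttpSessionStore setup described above
// (Wicket 1.4-era API); this is what we will replace in a moment.
public class EhourWebApplication extends AuthenticatedWebApplication {
    @Override
    protected ISessionStore newSessionStore() {
        // pages and their models live in the HTTP session,
        // which App Engine persists to the datastore
        return new HttpSessionStore(this);
    }
}
[/sourcecode]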

This may result in the session getting bloated until we encounter this error on the datastore: "com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call datastore_v3.Put() was too large."

The way to manage this problem is to use a different page store implementation, one backed by Google memcache instead. With the memcache implementation, the data (that is, the Wicket components and their models) is not associated with the session and therefore does not end up in the datastore.

Memcache also has a limit on the size of a single entry, and you may get an exception if the data to be stored is too large: "Caused by: com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call memcache.Set() was too large."
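
The MAX_PAGES_PER_MAP cap described next is the simpler fix, but if you want to fail fast instead of hitting that exception, one option is to measure the serialized size of a value before the put. The helper below is a hypothetical sketch, not part of the original workaround; it assumes App Engine's documented limit of roughly 1 MB per memcache value:

[sourcecode language="java"]
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Hypothetical helper (not part of the page store below): checks whether a
// serializable value stays under App Engine's ~1 MB memcache value limit.
public final class MemcacheSizeGuard {

    private static final int MEMCACHE_VALUE_LIMIT = 1000 * 1000; // ~1 MB

    public static boolean fitsInMemcache(Object value) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        // plain JDK stream, instantiated directly rather than subclassed,
        // so it also runs inside the App Engine sandbox (see below)
        ObjectOutputStream oos = new ObjectOutputStream(buffer);
        oos.writeObject(value);
        oos.close();
        return buffer.size() < MEMCACHE_VALUE_LIMIT;
    }
}
[/sourcecode]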

The MemcachePageStore we used has a MAX_PAGES_PER_MAP instance variable, and by capping it we can avoid the "memcache.Set() was too large" errors. Let's look at how we went about implementing it. First, we tell our Wicket Application class to use the memcache-based page store instead of HttpSessionStore.

[sourcecode language="java"]
public class EhourWebApplication extends AuthenticatedWebApplication {
    . . .
    @Override
    protected ISessionStore newSessionStore() {
        return new SecondLevelCacheSessionStore(this, new MemcachePageStore(3));
        // return new HttpSessionStore(this);
    }
    . . .
}
[/sourcecode]

If you have a look at the EhourWebApplication code listing, you will find that MemcachePageStore accepts a maxPagesPerMap size parameter in its constructor; the value to set depends on the application you are developing. Below is the actual MemcachePageStore implementation we found here.

[sourcecode language="java"]
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.Map.Entry;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.wicket.IClusterable;
import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.pagestore.AbstractPageStore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class MemcachePageStore extends AbstractPageStore {

    private static final Logger logger = LoggerFactory
            .getLogger(MemcachePageStore.class);
    private static final String PAGESTORE_MEMCACHE_KEY = "PAGESTORE_MEMCACHE_KEY";
    private final int MAX_PAGES_PER_MAP;

    private static final int NO_MAX_PAGES_PER_MAP = -99;

    private MemcacheService memcache;

    public MemcachePageStore() {
        logger.debug("New Memcache Page Store, MAX_PAGES_PER_MAP is Unlimited");
        MAX_PAGES_PER_MAP = MemcachePageStore.NO_MAX_PAGES_PER_MAP;
        this.initMemcache();
    }

    public MemcachePageStore(final int maxPagesPerMap) {
        if (logger.isDebugEnabled()) {
            logger.debug("New Memcache Page Store, MAX_PAGES_PER_MAP is "
                    + maxPagesPerMap);
        }
        MAX_PAGES_PER_MAP = maxPagesPerMap;
        this.initMemcache();
    }

    public boolean containsPage(final String sessionId,
            final String pageMapName, final int pageId, final int pageVersion) {
        return getPageMapStore(sessionId).getPageStore(pageMapName)
                .containsPage(pageId, pageVersion);
    }

    public void destroy() {
        // nothing to do - PageStores will be destroyed with their sessions
    }

    public Page getPage(final String sessionId, final String pagemap,
            final int id, final int versionNumber, final int ajaxVersionNumber) {
        final SerializedPage sPage = getPageMapStore(sessionId).getPageStore(
                pagemap).getPage(id, versionNumber, ajaxVersionNumber);
        return sPage != null ? deserializePage(sPage.getData(), versionNumber)
                : null;
    }

    public void pageAccessed(final String sessionId, final Page page) {
        // do nothing
    }

    public void removePage(final String sessionId, final String pagemap,
            final int id) {
        PageMapStore pms = getPageMapStore(sessionId);
        if (id == -1) {
            if (logger.isDebugEnabled()) {
                logger.debug("Remove page map: " + pagemap);
            }
            pms.removePageMap(pagemap);
        } else {
            if (logger.isDebugEnabled()) {
                logger.debug("Remove page: " + id + " from page map " + pagemap);
            }
            pms.getPageStore(pagemap).removePage(id);
        }
        putInMemcache(sessionId, pms);
    }

    public void storePage(final String sessionId, final Page page) {
        List<SerializedPage> list = serializePage(page);
        PageMapStore pms = getPageMapStore(sessionId);
        PageStore ps = pms.getPageStore(page.getPageMapName());
        ps.storePages(list);
        putInMemcache(sessionId, pms);

        if (logger.isDebugEnabled()) {
            logger.debug("Store page: " + page.toString());
            logger.debug(getPageMapStore(sessionId).getPageStore(
                    page.getPageMapName()).toString());
        }
    }

    public void unbind(final String sessionId) {
        memcache.delete(getPagestorePerSessionMemcacheKey(sessionId));
    }

    protected PageMapStore getPageMapStore(final String sessionId) {
        PageMapStore store = (PageMapStore) memcache
                .get(getPagestorePerSessionMemcacheKey(sessionId));
        if (store == null) {
            store = new PageMapStore(MAX_PAGES_PER_MAP);
            putInMemcache(sessionId, store);
            if (logger.isDebugEnabled()) {
                logger.debug("No Pagestore for sessionId " + sessionId
                        + " found. Created a new one.");
            }
        }
        return store;
    }

    private void initMemcache() {
        if (logger.isDebugEnabled()) {
            logger.debug("Initializing Memcache");
        }
        try {
            memcache = MemcacheServiceFactory.getMemcacheService();
        } catch (Exception e) {
            logger.error(
                    "Exception occurred when trying to initialize Memcache", e);
            memcache = null;
        }
    }

    private String getPagestorePerSessionMemcacheKey(final String sessionId) {
        return PAGESTORE_MEMCACHE_KEY + sessionId;
    }

    private void putInMemcache(final String sessionId, PageMapStore pms) {
        memcache.put(getPagestorePerSessionMemcacheKey(sessionId), pms);
    }

    protected static class PageMapStore implements IClusterable {
        private static final long serialVersionUID = 1L;

        private final Map<String, PageStore> _pageMaps = new HashMap<String, PageStore>();
        private final ReentrantReadWriteLock _pageMapsLock = new ReentrantReadWriteLock();
        private final int MAX_PAGES_PER_MAP;

        public PageMapStore(final int maxNumPagesPerMap) {
            MAX_PAGES_PER_MAP = maxNumPagesPerMap;
        }

        public PageStore getPageStore(final String pageMapName) {
            _pageMapsLock.readLock().lock();
            PageStore toReturn;
            try {
                toReturn = _pageMaps.get(pageMapName);
            } finally {
                _pageMapsLock.readLock().unlock();
            }

            if (toReturn == null) {
                /*
                 * create a new PageStore, note that another thread might have
                 * added one while no lock was held
                 */
                _pageMapsLock.writeLock().lock();
                try {
                    final PageStore old = _pageMaps.put(pageMapName,
                            toReturn = new PageStore(MAX_PAGES_PER_MAP));
                    if (old != null) {
                        // already exists, revert and use existing
                        toReturn = old;
                        _pageMaps.put(pageMapName, toReturn);
                    }
                } finally {
                    _pageMapsLock.writeLock().unlock();
                }
            }
            return toReturn;
        }

        public void removePageMap(final String pagemap) {
            _pageMapsLock.writeLock().lock();
            try {
                _pageMaps.remove(pagemap);
            } finally {
                _pageMapsLock.writeLock().unlock();
            }
        }

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder();
            for (final Entry<String, PageStore> entry : _pageMaps.entrySet()) {
                sb.append("PageMap: ").append(entry.getKey()).append("\n");
                sb.append(entry.getValue().toString());
            }
            return sb.toString();
        }
    }

    protected static class PageStore implements IClusterable {
        private static final long serialVersionUID = 1L;

        private final ReentrantReadWriteLock _pagesLock = new ReentrantReadWriteLock();
        private final LinkedHashMap<PageKey, SerializedPage> _pages =
                new LinkedHashMap<PageKey, SerializedPage>();
        private final TreeMap<PageKey, Integer> _pageKeys =
                new TreeMap<PageKey, Integer>();
        // if we have an overflow, we probably had a 100,000 years uptime,
        // hooray! :)
        private Integer _id = Integer.MIN_VALUE;

        private final int MAX_SIZE;

        public PageStore(final int maxSize) {
            MAX_SIZE = maxSize;
        }

        public void storePages(final List<SerializedPage> pagesToAdd) {
            _pagesLock.writeLock().lock();
            try {
                // reduce size of page store to within set size if required
                if (MAX_SIZE != NO_MAX_PAGES_PER_MAP) {
                    int numToRemove = _pages.size() + pagesToAdd.size()
                            - MAX_SIZE;
                    if (numToRemove > 0) {
                        final Iterator<Entry<PageKey, SerializedPage>> iter =
                                _pages.entrySet().iterator();
                        while (iter.hasNext() && numToRemove > 0) {
                            final Entry<PageKey, SerializedPage> entry = iter.next();
                            iter.remove();
                            _pageKeys.remove(entry.getKey());
                            numToRemove--;
                        }
                    }
                }
                for (final SerializedPage sPage : pagesToAdd) {
                    final PageKey pageKey = new PageKey(sPage.getPageId(),
                            sPage.getVersionNumber(), sPage.getAjaxVersionNumber());
                    // remove to preserve access order
                    _pages.remove(pageKey);
                    _pages.put(pageKey, sPage);
                    _pageKeys.put(pageKey, _id++);
                }
            } finally {
                _pagesLock.writeLock().unlock();
            }
        }

        public boolean containsPage(final int pageId, final int pageVersion) {
            _pagesLock.readLock().lock();
            try {
                // make PageKeys for below and above this version:
                // this id and version, but -1 for ajax
                final PageKey below = new PageKey(pageId, pageVersion, -1);
                // this id and version + 1, -1 for ajax
                final PageKey above = new PageKey(pageId, pageVersion + 1, -1);

                final SortedMap<PageKey, Integer> thisPageAndVersion = _pageKeys
                        .subMap(below, above);

                return !thisPageAndVersion.isEmpty();
            } finally {
                _pagesLock.readLock().unlock();
            }
        }

        public SerializedPage getPage(final int id, final int versionNumber,
                final int ajaxVersionNumber) {
            _pagesLock.readLock().lock();
            try {
                SerializedPage sPage = null;
                // just find the exact page version
                if (versionNumber != -1 && ajaxVersionNumber != -1) {
                    sPage = _pages.get(new PageKey(id, versionNumber,
                            ajaxVersionNumber));
                }
                // we need to find the most recently stored page - that is,
                // the page at the end of the list
                else if (versionNumber == -1) {
                    final PageKey fromKey = new PageKey(id, -1, -1);
                    final PageKey toKey = new PageKey(id + 1, -1, -1);

                    final Iterator<Entry<PageKey, Integer>> iter = _pageKeys
                            .subMap(fromKey, toKey).entrySet().iterator();
                    int max = -1;
                    PageKey maxPageKey = null;
                    while (iter.hasNext()) {
                        final Entry<PageKey, Integer> entry = iter.next();
                        if (entry.getValue() > max) {
                            max = entry.getValue();
                            maxPageKey = entry.getKey();
                        }
                    }
                    if (maxPageKey != null) {
                        sPage = _pages.get(maxPageKey);
                    }
                }
                // we need to find the index with the highest ajax version
                else if (ajaxVersionNumber == -1) {
                    // make a page key which will be straight after the wanted
                    // PageKey, i.e. version number is one after this one, ajax
                    // version number is -1, page id is the same
                    final PageKey toElement = new PageKey(id,
                            versionNumber + 1, -1);
                    final SortedMap<PageKey, Integer> possiblePageKeys = _pageKeys
                            .headMap(toElement);
                    if (possiblePageKeys.size() > 0) {
                        sPage = _pages.get(possiblePageKeys.lastKey());
                    }
                }
                return sPage;
            } finally {
                _pagesLock.readLock().unlock();
            }
        }

        public void removePage(final int id) {
            _pagesLock.writeLock().lock();
            try {
                final Iterator<Entry<PageKey, SerializedPage>> iter = _pages
                        .entrySet().iterator();
                while (iter.hasNext()) {
                    final PageKey pKey = iter.next().getKey();
                    if (id == pKey.getId()) {
                        iter.remove();
                        _pageKeys.remove(pKey);
                    }
                }
            } finally {
                _pagesLock.writeLock().unlock();
            }
        }

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder();
            final Iterator<Entry<PageKey, SerializedPage>> iter = _pages
                    .entrySet().iterator();
            while (iter.hasNext()) {
                final Entry<PageKey, SerializedPage> entry = iter.next();
                sb.append("\t").append(entry.getKey().toString()).append("\n");
            }
            if (logger.isTraceEnabled()) {
                sb.append("\tPageKeys TreeMap: ").append(_pageKeys.toString());
            }
            return sb.toString();
        }
    }

    protected static class PageKey implements IClusterable, Comparable<PageKey> {

        private static final long serialVersionUID = 1L;

        private final int _id;
        private final int _versionNumber;
        private final int _ajaxVersionNumber;

        public PageKey(final int id, final int versionNumber,
                final int ajaxVersionNumber) {
            _id = id;
            _versionNumber = versionNumber;
            _ajaxVersionNumber = ajaxVersionNumber;
        }

        public int getId() {
            return _id;
        }

        public int getVersionNumber() {
            return _versionNumber;
        }

        public int getAjaxVersionNumber() {
            return _ajaxVersionNumber;
        }

        @Override
        public int hashCode() {
            final int prime = 31;
            int result = prime + _ajaxVersionNumber;
            result = prime * result + _id;
            result = prime * result + _versionNumber;
            return result;
        }

        @Override
        public boolean equals(final Object obj) {
            if (this == obj) {
                return true;
            }
            if (obj == null) {
                return false;
            }
            if (obj instanceof PageKey == false) {
                return false;
            }
            final PageKey other = (PageKey) obj;
            if (_ajaxVersionNumber != other._ajaxVersionNumber) {
                return false;
            }
            if (_id != other._id) {
                return false;
            }
            if (_versionNumber != other._versionNumber) {
                return false;
            }
            return true;
        }

        @Override
        public String toString() {
            return "PageID: " + _id + " \tVersion: " + _versionNumber
                    + " \tAjax: " + _ajaxVersionNumber;
        }

        public int compareTo(final PageKey o) {
            if (_id != o.getId()) {
                return _id - o.getId();
            } else if (_versionNumber != o.getVersionNumber()) {
                return _versionNumber - o.getVersionNumber();
            } else if (_ajaxVersionNumber != o.getAjaxVersionNumber()) {
                return _ajaxVersionNumber - o.getAjaxVersionNumber();
            }
            return 0;
        }
    }
}
[/sourcecode]

However, the suggested implementation is still not complete. After making the changes above, you will get stack traces like "java.security.AccessControlException: access denied (java.io.SerializablePermission enableSubclassImplementation)".

This error arises because Google App Engine does not allow the object input and output streams to be subclassed, as that is a potential security risk. Have a look at the issue logged on this here. Until this issue is resolved, we again have to look for a workaround.
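
To see why, recall that the JDK demands SerializablePermission("enableSubclassImplementation") when constructing stream subclasses that override certain serialization methods, and the App Engine sandbox denies that permission. A hypothetical illustration (this class exists only to demonstrate the failure):

[sourcecode language="java"]
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// Hypothetical example: under App Engine's security manager, constructing a
// subclass like this throws the AccessControlException quoted above, because
// overriding methods such as writeUnshared() forces the JDK to check
// SerializablePermission("enableSubclassImplementation").
public class SubclassedOutputStream extends ObjectOutputStream {

    public SubclassedOutputStream(OutputStream out) throws IOException {
        super(out); // the security check happens here, in the superclass constructor
    }

    @Override
    public void writeUnshared(Object obj) throws IOException {
        super.writeUnshared(obj);
    }
}
[/sourcecode]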

Taking a clue from the discussion here, we found that Wicket provides an Objects class which has a static setter for the ObjectStreamFactory. We can provide our own implementation of IObjectStreamFactory that does not subclass the input and output streams.

We created a GAEObjectStreamFactory; let's look at the code listing.

[sourcecode language="java"]
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

import org.apache.wicket.util.io.IObjectStreamFactory;

public class GAEObjectStreamFactory implements IObjectStreamFactory {

    public ObjectInputStream newObjectInputStream(InputStream in) throws IOException {
        // instantiate the plain JDK stream directly instead of subclassing it,
        // so the App Engine sandbox permission check is never triggered
        return new ObjectInputStream(in);
    }

    public ObjectOutputStream newObjectOutputStream(OutputStream out) throws IOException {
        return new ObjectOutputStream(out);
    }
}
[/sourcecode]

Now we can pass this GAEObjectStreamFactory to the static setter on Wicket's Objects class (org.apache.wicket.util.lang.Objects) in our Wicket application's init() method.

[sourcecode language="java"]
public class EhourWebApplication extends AuthenticatedWebApplication {
    . . .
    @Override
    public void init() {
        super.init();
        Objects.setObjectStreamFactory(new GAEObjectStreamFactory());
    }
    . . .
}
[/sourcecode]

That is all we need to do to use a Google memcache based page store implementation. Wicket is a nice framework, but it has serialization issues when working on Google App Engine. Our application, which uses Wicket with JPA and Spring, is stable now after these changes. Hopefully, this issue gets resolved soon so that we no longer have to deal with the workaround.

1 thought on "Managing Wicket Serialization Problem On Google App Engine"

  1. Any thoughts on porting this to Wicket 1.5.2? AbstractPageStore has been removed, so MemcachePagestore will have to implement IPageStore or possibly extend DefaultPageStore and override some of its functionality.

    Wicket’s Objects class also has the setObjectStreamFactory method removed.
